Imitating success often makes sense; after all, we don't want to imitate failure. But your company is almost certainly not facing issues anything like Amazon's, Facebook's, or Google's, and it doesn't have anywhere near as many programmers to throw at them.
Another good example is monorepos. A decision maker sees a headline about Google and Facebook using monorepos and mandates that his company switch. Unfortunately, he or she didn't read, or didn't understand, the actual article explaining that it only works because of strict code reviews and extensive integration tests.
Another hype that keeps coming back is the magic of trunk-based development (with a random PO reading some flashy article about feature switches as the wondrous solution for A/B testing AND faster development).
Nowadays I even consider putting React and Angular into this pack, since, you know, "if it's good enough for Facebook/Google then it must be really good" - anyone who has ever tried to increase performance for a React site (and realized that precompiling templates was not exactly a bad idea years ago) or hit the wall with inline async helpers and Rx magic might know my woes. But then again, give me a fast and reliable server-rendered site over an unstable trainwreck SPA anytime and I will be happy.
I'm not sure I'm parsing this sentence correctly. Are you saying that precompiling templates and rehydrating them doesn't cut it anymore? If so, why not? I haven't used React much, but I've done some work in a framework with similar principles and I felt like proper precompiling, with basic HTML fallbacks cooked in where possible, provided all the performance of server rendered sites with the added bells and whistles of SPAs (including that most subsequent requests can load quicker than a full rerender, provided JS is present).
React's Fiber tries to be smart about rendering with tree-section diffing, but unless you use immutable data it's "not enough" to rely on - without immutability, even in a smaller Redux app there is a good chance that you trigger unintentional re-renders, which, while they may not create new tree-branch renderings, still need to be evaluated.
This of course applies to the client; I don't have experience with Next.js or similar tools.
Unfortunately he didn't know that `git` does not scale well with the size of the repo, and that Google had to build their own custom repository system to tackle this.
I think labor market incentives distort a lot of engineering decision-making, especially at smaller companies, where the boss hears that React is hot and decides Solution X should be built with React.
Developers eager to develop their skills, increase their rates and advance in seniority end up focusing on the new hotness because of this.
Engineers who want to increase their impact and scope should focus first and foremost on the skill of making smart tradeoffs between technical realities and the requirements of other business functions--aside from simply writing code, this is what engineering is indispensable for.
Beyond that it pays to go deep in your mastery of a particular platform, the dependencies underlying it, and related technologies--all of these enhance your ability to design solutions.
Unfortunately the reality is there are very real financial incentives to keep jumping to the new hot thing every year.
I have recently started contracting in London, and this is 100% what I have found in my younger colleagues. Trying to do everything "the right way" without keeping business constraints in mind at all has created total mutual distrust with upper management - which, in turn, is completely unable to explain how working within those constraints makes you a much better developer for the real world, not for some fantasy world where time and resources are infinite.
* Ex-engineer managers pointing at the hot technology, then either brown-nosing or incompetent engineers running with it
* Tech debt: throw-away prototypes / experiments forced into production as products
* Engineers padding their technical resume (probably intending to transfer before the debt is due)
* Engineers reusing existing knowledge instead of learning
* Premature engineering, small teams building for a big (hypothetical) future instead of building everything that comes in between
* Consultants / Contractors
* Low hiring bar
And plenty more. It's usually a systemic failure, with multiple parties failing to do what's best for the customers and the business.
Usually I see the opposite: engineers jumping on the hot technology of the day (and making a half-arsed job of it) rather than getting to know any one technology well.
It's worse than lazy; it's the new way of doing engineering that is gluing together ready-made components.
I agree with the sibling comment of 'dcow'.
I will add a) that innovation is a slow process and b) that a lot of projects take the _fast path_ of 'fill-in-the-blanks' tools instead of thinking about a good solution.
To avoid speaking in a vacuum, I will only cite 'Ansible', which is a tiny improvement over the previous hype. Obviously, if you consider the whole 'Ansible + Galaxy' there is value, but it's still the wrong solution. All arguments I've seen for Ansible seem wrong to me.
AFAICT that time would have been better spent on functional devops approaches, my favorite being 'guix'. Also, I understand that Ansible Galaxy was built with thousands of man-hours by hundreds of contributors, whereas a functional approach will probably require more time and more focused effort.
Many cannot come to grips with the fact that the majority of work is just plumbing: yet another CRUD app, or plain maintenance.
So it is cooler to keep pushing for the same things those companies use, even if there is no need; after all, the CV needs to match HR's latest buzzword-soup filter.
Then, I realized what was going on: Apple had introduced their app store a few years before, and it was a big deal. Steve Jobs was the CEO on the cover of every magazine. The CEO looked at Apple, and said, "we must have an app store", and that was all there was to it, even though Apple sold several orders of magnitude more units, making the concept of an app store way more suitable for them than for the company I was working for.
I tried to make this point at the last place because they were insisting that because Amazon et al used microservices, they were obviously the right way. They were planning on 19 microservices to replace the monolith with a dev team of ... 3 perms and 2 contractors.
I thought the main benefit of a microservice architecture is to break up an application into smaller services that are about the size that a small development team can build and maintain on their own. I'm not experienced in this area, but more than one microservice per developer seems like a major "architecture smell."
Somehow this got mangled into "microservices are good because they help teams work together" which isn't what it's about at all. At least at Google the smallest hello world server was something like 50mb when I worked there, just because of the huge dependency graph. It was common for servers to pull in hundreds of modules maintained by nearly as many different teams, all in a single binary. Each server was essentially a "monolith" except for the stuff that had to run in separate backends because of resource constraints.
Yeah, it's normally that you need N developers per microservice (I've seen N=5 recommended, for example.)
It's not a smell; it's a hot, steaming turd.
I really like MobX ATM. It's a lot easier to explain to new people as well. Redux is like... too much...
- It’s a source of thread-safety bugs. With multiple cores even in phones, and async programming on the rise, this has become a bigger issue.
- It’s a source of general program logic bugs.
I wouldn’t say that it has come all of a sudden. Like other modern techniques, it has been standard in functional programming for years. In mainstream programming languages it’s also not totally new; for instance, immutable strings were a design decision in Java.
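In Python land the same discipline is one decorator away. A tiny sketch (the class and its fields are invented for illustration):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    p = Point(1.0, 2.0)
    # p.x = 3.0  # raises dataclasses.FrozenInstanceError -- a value that
    #            # can't be mutated out from under another thread or caller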
There are very few established businesses which can 100% squeeze themselves into the constraints of a serverless architecture.
Nor the underlying infrastructure - for example, physical server hardware that's a fraction of the cost of cloud.
How often is this borne of ignorance, though?
> in the past there were a lot of story's about companies that needed to rebuild there system
More specifically, are investors merely remembering the struggles of the dot-com boom, when transistor density was one thousandth that of today?
Internet usage (number of users and number of services each one uses) has grown, too, but I haven't seen statistics that suggest this would be more than 100x of 20 years ago.
> Although it makes sense to ignore scaling at the beginning you need to design your application in a way that you can easily transition to something that can scale.
This presents a false dichotomy. One doesn't have to ignore scaling at the beginning to make the decision to trade initial time-to-market for eventual "horizontal" scalability (usually what is meant by "can scale"), but that may be what happens in practice.
...even though one can buy a remarkably large single server at a 3-6x price premium over mid-range single servers, as well as use other, more traditional techniques, before "needing" to rebuild the entire system.
That's a situation fairly far removed from a VC refusing to invest because the company isn't imitating a FAANG's architecture.
Probably not, even if it's 50x that at 1TB/year, unless it's in very small chunks and/or requires an outsized amount of (possibly realtime) processing.
Current commodity servers will scale up to 12TiB of main memory, and if you're willing to use previous-generation CPUs, those servers can take twice the RAM (24TiB). Under half a megabuck without storage.
Not the same thing, but the psychology is related.
Ref: Michael Crichton; Gell-Mann amnesia
Even Twitter, probably the most successful performant microservices-at-scale company, advocates waiting as long as absolutely possible before moving to microservices. They said "It fixes one problem, and makes every other piece of application development significantly harder."
I'm in a small engineering org, and I split out a bunch of security-critical code from an insecure monolith and moved it into a microservice, running in a very tightly controlled and audited environment. A tiny part of my runtime needed different security from the rest of it, and a microservice was the easiest way to accomplish that. Now I've traded a security problem for a latency problem, as everything that used to be handled internally is now an RPC - though that is an easy problem to solve.
A/B testing at deployment for small services
Less dependency on a specific software stack
Strict programming by interface
Multiple languages: the shared infra in this case is your Kubernetes yaml files and Docker build files, both of which can be shared easily. The rest is either RESTful or Kafka consume/produce. Python/R/Scala/Java/C#/Haskell/OCaml can all interface with that.
Feature flags are a possible solution. I worked with them in the past and, applied correctly, they can offer a similar experience. However, one often needs more than just a boolean filter: you need to have the logic in your application to route the requests to, say, two different implementations of the same interface. Do this on the micro-service scale, and you get a nice SoC at the proxy level. It moves A/B testing to be part of the infra, not the application logic.
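For contrast, the in-application version of that routing looks roughly like this (a minimal sketch; the backend names and the 10% bucket are invented):

    from abc import ABC, abstractmethod
    from zlib import crc32

    class SearchBackend(ABC):
        @abstractmethod
        def search(self, query: str) -> list: ...

    class OldSearch(SearchBackend):
        def search(self, query):
            return ["stable", "results"]

    class NewSearch(SearchBackend):
        def search(self, query):
            return ["experimental", "results"]

    def backend_for(user_id: str) -> SearchBackend:
        # Deterministic bucketing keeps a user in one variant across
        # requests; this is the routing logic that lives in the app,
        # which is exactly what a proxy-level split would remove.
        if crc32(user_id.encode()) % 100 < 10:  # 10% get the new path
            return NewSearch()
        return OldSearch()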
More dependency to homegrown immature solutions: care to elaborate?
Isolated half-assed solutions with few people responsible: care to elaborate? My little micro-service component needs the bare minimum of access, which I can precisely provide to it. No memory is shared, no storage is shared... Often it is much easier to prove that the system stays within confidentiality and availability limits. Networking allows me to transparently monitor all data flowing through my services.
Strict programming by interface vs. static typing. Oh yes, I totally agree with static typing! Such a big advantage over dynamic typing - read my other comments on HN. However, there is no static typing across versions of the same software. Forward compatibility is hard to achieve when all components need to be upgraded at the same time. I still dread the days when we would upgrade 'a major version' and all kinds of database tables had to be changed, causing chicken-and-egg problems. Not saying that this problem is completely eliminated with a micro-service architecture, but it forces developers to think about what can be reasonably isolated, leading to a higher SoC. It also prevents the humongous, unmaintainable 300+ table RDBMSs, which are often the primary cause of stagnated development.
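One way to get that forward compatibility in practice is a tolerant reader. A rough sketch (the field names are invented):

    def parse_order(payload: dict) -> dict:
        # v2 renamed "id" to "order_id" and added "total_cents"; this
        # reader accepts both generations of the event, so producers and
        # consumers don't have to be upgraded at the same time.
        return {
            "order_id": payload.get("order_id", payload.get("id")),
            "total_cents": payload.get("total_cents", 0),
        }

    assert parse_order({"id": 7}) == {"order_id": 7, "total_cents": 0}
    assert parse_order({"order_id": 8, "total_cents": 100})["order_id"] == 8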
Failure isolation: I don't understand your reasoning, sorry.
They just stick to MVC and their preferred language.
This is perhaps the best bang for your buck until you actually know what the bottlenecks are as it otherwise becomes a brute-force approach to "scale everything individually". Unfortunately "our code is ready to scale" doesn't quite sound as cool as "we run a thousand micro-services".
Personally I'd say delay until you can find an area causing such a performance or scalability issue that it justifies its own repo(s), build pipelines / devops / ops, deployable artifacts, and team to hold the context on it.
Without that, your profits and engineering team are best invested in extending your offering to better serve your customers.
The more things change, the more they stay the same.
I much prefer horizontally scaling a big old monolith backed by a sharding database like CockroachDB or Cassandra. Same scalability, and you get to keep some ACID semantics.
Concerns about code size are overblown. Facebook's mobile app is utterly massive and it still runs mostly okay even on weak devices. The maximum practical code size for a monolithic server-side app will probably never be reached. We're talking maybe 100 million LoC before you run into real problems, especially if you use a VM language like Java or C#, where hot and cold code can be dynamically swapped out and code is stored in a very space-efficient fashion.
When you reach the scale where codebase size is an issue, you've probably already done several rewrites to deal with such massive traffic volume.
I agree with your comments but this is a bad example. The Facebook app is horrible to use on high end devices. The mobile website is much more performant and has most of the core functionality.
The app is actually worse in one way - they chose to use the iOS WebView that doesn’t support the native ad-blocking framework. So when you click on an external page, it’s usually horrible - because modern web.
The fact is that microservice tooling is in its infancy and shouldn't be used in production unless you're willing to roll your own everything. I worked at a place that tried to use microservices + event sourcing + CQRS; predictably, it was a massive disaster.
I still think the monolith + distributed database will win out. I've never heard of a time when horizontal scaling was a problem not related to the database.
In other words: creating or migrating to a micro service architecture is very expensive, but once it’s in place, adding a new micro service is trivial. At Zalando the goal is that you can do so within half an hour - that is, take an idea, implement it and deploy it to production.
This leads to architectural decisions no sane person would make at a smaller company. For example let’s say you get data from an external company via SFTP and you make it internally available, so what do you do?
1. You create a micro service that polls the supplier's SFTP server for new files and sticks them into an S3 bucket
2. You create a micro service that takes the files from S3, parses them, transforms them into JSON and publishes events to Kafka (this step is sketched after the list)
3. You create a micro service that takes those events, enriches them with some other data and republishes new events.
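Step 2, for instance, could be little more than this (a sketch only - the bucket, topic and file format are invented; boto3 and kafka-python are real libraries, but the wiring here is illustrative):

    import csv, io, json
    import boto3
    from kafka import KafkaProducer

    s3 = boto3.client("s3")
    producer = KafkaProducer(
        bootstrap_servers=["kafka:9092"],
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def handle_new_file(bucket: str, key: str) -> None:
        # Pull the supplier file that step 1 dropped into S3...
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # ...and publish one JSON event per row; schema enforcement
        # would live in something like Nakadi (mentioned below).
        for row in csv.DictReader(io.StringIO(body.decode("utf-8"))):
            producer.send("supplier.items", value=dict(row))
        producer.flush()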
Especially when you use Kubernetes, you also start thinking about infrastructure completely differently. Why use multiple threads or processes for example, when you can just run multiple instances of your application very cheaply and temporarily run further instances to handle background tasks?
Events go through Nakadi, which enforces a schema and authorization. You can further tell which services are subscribing to such events and, through the aforementioned application registry, who is responsible for them.
Additionally compliance with various laws and shareholder expectations requires regular reviews, to identify and fix issues such as missing documentation, monitoring, SLOs, data storage restoration tests (in production), load tests etc.
The scenario you describe is not possible or allowed. As awareness of and adherence to these rules is also part of performance reviews, it’s also not in any engineer’s interest to do that.
You’re right of course that one needs to be aware of that.
I think, ideally, for a system to be successful, you need to understand all the places where you build up messages, threads, buffers, whatever. It ultimately doesn't matter which you use. Just don't use too many. And don't make migrating a solution from one type to another part of your critical path to launch. Logging, in particular, is something I've grown tired of people reinventing before they have even launched.
I inherited a bunch of micro services at my current job, with a fairly small team. My feeling has been that the code was prematurely micro-serviced, considering the team is so small and every service was mainly just CRUD in front of the same Postgres instance.
I’ve slowly been demonstrating the benefit of modularized “sub-apps” in a single monolith over lots of microservices that all reinvent the wheel. And I think I’ve convinced them that this is easier going forward. But I’m sometimes at a loss about what boundaries I should be putting in place such that we don’t end up with a ball of mud in two years.
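One boundary discipline that can help (a sketch, with an invented layout): each sub-app exposes exactly one service module, and everything else in it is private.

    # Hypothetical monolith layout -- one package per sub-app:
    #
    #   app/
    #     billing/
    #       service.py   # the only module other sub-apps may import
    #       models.py    # private to billing
    #     catalog/
    #       service.py
    #       models.py
    #
    # In catalog code:
    #   from app.billing.service import charge_customer   # allowed
    #   from app.billing.models import Invoice            # forbidden
    #
    # The rule can be enforced mechanically (e.g. with import-linter)
    # rather than by review discipline alone.

If the boundaries hold, each package is a candidate to become its own service later - or to stay put, which is cheaper.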
Docker is, too: https://github.com/docker/docker-ce
It isn't one codebase and when deployed there are multiple daemons running.
Speaking about Django, the framework provides core features (HTTP handling, database connections, ORM, templating...) that apps (what you call "sub-apps") can leverage. A Django project is just a collection of apps, either yours or third-party.
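Concretely, a project is not much more than this setting (the non-Django app names below are invented):

    # settings.py
    INSTALLED_APPS = [
        "django.contrib.admin",         # ships with Django
        "django.contrib.auth",
        "django.contrib.contenttypes",
        "rest_framework",               # third-party, from PyPI
        "accounts",                     # your own apps, one per concern
        "catalog",
        "billing",
    ]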
You'll always get some portion of the community hating whatever's on top. Even in 2018, if your startup isn't using one of these two, you'd better have a damn good reason. Rails, and to a lesser extent Django, are the absolutely boring but productive as fuck workhorses of any startup. Bypass them at your peril when your project passes in front of competent due diligence.
Also, my personal site is a medium-sized monolith:
Linux isn't a monolith. And it isn't a monorepo either since Linux comprises far more than just the kernel.
"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. "
( https://en.wikiquote.org/wiki/Linus_Torvalds#1991-94 )
This sounds a bit less like a monolith and a bit more like a bunch of microservices that you've welded together in one container. Which is great! If you're disciplined about maintaining that, it will be very easy to spin off these individual modules as their own microservices when the time comes.
But I'm not sure that this constitutes an argument for monoliths > microservices.
Worse, often the complexity they bring is undocumented because it lies in the relationships between applications, accounts, infrastructure, and so on, and does not lie within the boundary of a single application. Microservices are simpler to write because you can pretend a lot of the ecosystem does not exist.
Any system architecture needs to consider the specific case, there’s no best pattern for everything.
I typically see people try to push microservices as a "best practice" for improving the delivery rate of software teams. They carve the codebase into separate services, but still leave all of the persistence in place so that all services are communicating with the same DB(s). The result is a tangled web of interdependent services, plus additional tooling like Docker, Kubernetes, etc. that actually makes the teams even slower than they were before.
I'm glad to see us as a community getting more pragmatic about these topics, and realizing that _best practices_ are highly context dependent!
Break it up and optimise where the sticking points are as the customer base and feature set mature.
You cannot survive microservices without real-time dashboards and proper logging to understand the health of your environment, whereas with a monolith, it was fairly obvious because there's the server and usually a database, and that's it.
I do the same thing with the components of the monolith. I'll use Pyramid + SQLAlchemy + WTForms all day long with a reasonable separation of concerns. When I hit a real problem where the ORM is causing a bottleneck, hey, that's a good time to drop the ORM and use stored procs.
Same thing with almost any element of the web app. When it's a clear bottleneck, drop one layer of abstraction and move on.
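For the ORM case, "dropping a layer" can be as small as this (a sketch with SQLAlchemy; the connection string, stored procedure and schema are invented):

    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://app@db/appdb")

    def monthly_report(account_id: int) -> list[dict]:
        # The ORM version materialized thousands of mapped objects on
        # this hot path; here the database does the work and we hand
        # back plain rows.
        with engine.connect() as conn:
            result = conn.execute(
                text("SELECT * FROM monthly_report(:acct)"),
                {"acct": account_id},
            )
            return [dict(row._mapping) for row in result]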
This will get you excellent performance without the overhead of an SPA for 99% of your use cases.
Design your way around page refreshes, and judiciously apply ajax calls, and you're done.
If you aren't one of the big 5, this is good enough, even with a slow language like Python.
It's definitely not black/white, but a spectrum.
The fact is that "monolith" applications can scale, and they can be deployed as a unit and they can still adjust to varying load using modern cloud infrastructure.
The enemy to scale, productivity and success isn't "monoliths". It's complexity! And an experienced and pragmatic developer will tell you that they fight daily to avoid complexity at every turn. Yes, sometimes it's necessary, but they weigh the pros and cons on a case-by-case basis to ensure that the gradient to complexity is always as low as possible.
It is caused by bringing together unrelated things.
It is caused by violating the single responsibility principle.
So, we isolate state from communication. We separate interface from implementation. We see as many, where others see one. We try to see the network of dependencies, where others see a single line of code.
Using this new perception, we find that the large application entangles the change of (code) state of one part with every other through its deployment. The static typing guarantees only validate one specific instant in time, but nothing about the future or the past.
So, take a step back and see the deployment of code as a functional part of the code. Now the monolith forms complexity and the microservice isolates it to the minimal surface.
Take it one step further, and we see that the actor model is the exponent of this idea. Late binding, late dependency tracking, programming by behavioural contract.
It is also caused by wrongly separating things that are closely related and putting a network or something similar in between.
You cannot reduce software development to a formula.
Microservices are a siren song for many small/medium sized companies. What they need is clean separation between subdomains (composing domain level API functionality together in the user level APIs to implement business logic) and clean separation between domain and infrastructure code. There are a lot less complex ways to achieve this without microservices.
Isn't this already an application composed of four micro-services: three micro-services are 3rd-party, while one is developed in-house?
Many of us have worked on dozens of these before and, sure, they work - but microservices were invented specifically to address the serious concerns with these patterns.
1) It's far too easy to create spaghetti code that spreads across multiple modules. And because the modules are so intertwined, teams are very reluctant to do major refactors - especially when the original authors eventually leave.
2) It's less flexible at scaling. In a monolith you have to horizontally scale the entire application behind some load balancer. So if you have an Email concern that needs to handle 100x more load, what do you do?
3) API contracts are far easier and safer to evolve than ESB ones. There are also clean mechanisms in APIs for handling versioning, capability discovery (e.g. HATEOS), documentation (e.g. Swagger), rate limiting, caching, analytics, billing, authentication, etc.
4) Microservices are much easier to reason about, test, document, teach new starters about and most importantly replace. If I replace a microservice I just need to verify that the API contract is the same. In a monolith I need to basically retest everything since there is no guaranteed isolation.
I could keep going on. But not sure this blog post offers anything particularly insightful in how to deal with the negatives of a monolith architecture.
Do you honestly believe this drivel?
Take the 'much easier to reason about'.
A monolith is simple to reason about. You can literally see the whole call stack in your favourite debugger.
It's so simple to reason about, yes? Do you understand why? Because a simple breakpoint gives you the whole program in a monolith.
That's literally impossible in a micro-service setup. With a microservice setup even the most trivial of bugs become an utter nightmare to debug.
Alternative view: It's not.
I get your point but with a well designed microservice it's pretty simple as well. You just need proper logging in production and a good development setup.
Complex issues involving multiple services are a lot harder, I grant you.
Microservices are not about scaling the runtime, they’re about scaling the engineering organization and deployment processes so that teams can iterate and ship independently. With thousands of engineers committing to the same artifact, the probability that at least one commit is bad (blocking rollout of all others) approaches 1.
And scaling different parts of the application is just one of many benefits of microservices.
I realize this was a typo but first thought it was an awesomely-named serialization library :)
Unfortunately there is just no way to enforce discipline, because monoliths sometimes demand unintuitive behaviour. Often it is better to cut/paste code than it is to link from one module to another. And don't forget that when a deadline hits, developers are often forced to do things that aren't always architecturally sound.
Microservices forces isolation between modules.
In .NET you could compile individual assemblies separately and use technical measures to prevent developers from one component from touching code in the other component. The external surface of the assembly would be its API, only no network traffic is required. Does this give us enough discipline, in your opinion?
Microservices surely put the dream of code sharing to REST. Not sure it is a good thing.
To clarify. I don't think microservices are a bad thing, but monoliths are many (most) of the times perfectly good.
Also, I think the middle ground is the better option. You can have services but they don't need to be micro.
Lastly, don’t blindly believe some blog article - they are likely written by unemployed folks with too much time on their hands (possibly using the article to find their next gig).
1. Large-scale refactoring is easier when you have a single build artifact that you can test and deploy. Refactoring a microservice architecture can be significantly harder if that refactor crosses microservice boundaries (say, if you're repeatedly doing the same thing across multiple microservices and want to extract that behavior). Microservices do make it easier to do refactors that don't cross service boundaries, but that assumes that you chose good service boundaries in the first place, and if you can do that, you can also modularize your monolith well enough to make those small-scale refactors easy, too.
2. While this is a valid point, not only do you not have to go to microservices to address it, but using microservices naively can get you worse results. Email isn't a synchronous task, and you should be dropping it on a message queue for an offline worker to pick up anyway (sketched below).
For different feature concerns that are online and need to happen synchronously with requests, it can be handy to route different endpoints to different groups of servers that are scaled and optimized independently of each other. But that doesn't actually require microservices; you can just deploy a monolith that way.
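The email case from point 2, sketched with Celery (the broker URL and task body are placeholders):

    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def send_email(to: str, subject: str, body: str) -> None:
        ...  # hand off to your SMTP provider here

    # In the request handler this returns immediately; a separately
    # scaled worker pool does the actual sending:
    #   send_email.delay("user@example.com", "Welcome!", "...")

The web tier and the worker pool scale independently here, and it's still one codebase and one deployable.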
3. Most of those mechanisms only exist because coordinating behavior over a distributed system is inherently harder. When you're calling other code in the same process, you have much firmer guarantees about latency and availability. API contracts within a service don't have to be monitored, metered, or rate limited the way service endpoints do. You don't have to worry about serialization and deserialization--not even from a performance standpoint, but from a reliability standpoint. And you don't have to worry as much about input validation if you're using a strongly typed language.
4. A microservice is easier to test. A full product that is composed of microservices is harder to test.
Testing a microservice architecture entails building as many of the microservices as are necessary for a specific piece of functionality, deploying all of them to a shared test environment (or a set of shared test environments that are configured to interoperate), and configuring the microservices to communicate with each other within that test environment. And it's very unlikely that you can test a single feature across your microservice architecture in isolation unless you have at least as many separate, isolated test environments as you have features under active development. Additionally, since these test environments are expensive, they tend to be long-lived and accumulate various operational issues. Functionality that, within a monolith, could be tested within an isolated, repeatable build process, gets punted out to these dirty and unpredictable test environments in a microservice architecture.
> Microservices are complicated to develop
> Microservices dependencies are difficult
Well, if you say so! Kind of hard to take this article seriously when the argument boils down to a tautology.
Microservices may be hard! This article contributes nothing to that conversation, though.
I agree, but later on, when the team and the codebase grow, you'll need to split it into smaller parts, and microservices (or any other similar architecture) give you some guidance that can be shared across the team(s), making architecture decisions more consistent and providing a common framework that improves reusability - but it's not an easy path!
Look, my microservice is clean and nice and has a 100% test coverage, I couldn't care less if you can't communicate with it from yours. Solve it somehow.
Now get off my lawn and let me rewrite the whole thing in Scala.