Once fully "micro serviced" http://yobriefca.se/blog/2013/04/29/micro-service-architectu... you can replace/evolve one service at a time, instead of more costly and risky whole applications at a time. A natural pattern for evolving those chosen services is the strangler application pattern, also described by Fowler: http://martinfowler.com/bliki/StranglerApplication.html
Systems like Mesos, Flynn, and the like will make orchestrating these microservice infrastructures substantially more effective. They allow developers to focus more on the service, rather than on the infrastructure and underlying dependencies.
I'm in the midst of re-architecting a legacy system whose terribleness is legendary even in Hell.
Given that this system has a fairly small set of nouns, as well as a limited set of verbs that can apply, I opted to try to abstract each of these out into its own service. This enables a rolling replace/upgrade cycle that lets the old cruft continue running while limiting the scope of effort to something less than the Augean Stables.
One characteristic of the beshitted legacy system is all manner of action-at-a-distance and an embarrassing lack of code reuse, so even minor changes in business logic involve a nightmare of grepping and hoping you found all the areas that need to change to either support or implement that logic. To that end I opted to have a message broker for create/update/delete operations that was responsible for distributing such events to other services as business logic dictated. Internally we nicknamed it "Sorta SOA".
As an example workflow: a user is created, and the originating service publishes the event to the broker. It can then treat the response as a pub/sub message, an acknowledged-level message, or an RPC-style message.
The broker gets the message and generates a global transaction ID that can be used to trace all further emitted messages, as well as to handle the final response to the originating service. It ack-responds to the originator with the ID. It then has a logic chain keyed on the name of the event message and can call other services, such as the communication service, which may email or SMS someone. The comm service acks on successful receipt of the message, then on delivery responds with any result set. The broker receives that result set and checks whether it has completed all tasks for that transaction. If so, it responds to the originator with the results of all tasks (or a defined response using a subset of the results).
All services are idempotent and communication is via RabbitMQ to support fabric changes and persistence/guarantees of delivery.
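The broker flow described above can be sketched in memory. This is a minimal illustration only: a real deployment would route these messages over RabbitMQ with acks and delivery guarantees, and all names here (Broker, comm_service, event names) are hypothetical, not the commenter's actual code.

```python
import uuid
from collections import defaultdict

class Broker:
    """In-memory sketch of the CRUD-event broker: assigns a global
    transaction ID, fans the event out along a logic chain, and
    aggregates results for the originator."""

    def __init__(self):
        # event name -> list of interested service handlers
        self.routes = defaultdict(list)

    def subscribe(self, event_name, handler):
        self.routes[event_name].append(handler)

    def publish(self, event_name, payload):
        # Global transaction ID used to trace all downstream messages.
        txn_id = str(uuid.uuid4())
        results = {}
        # Logic chain: call each service and collect its result set.
        for handler in self.routes[event_name]:
            results[handler.__name__] = handler(txn_id, payload)
        # All tasks for the transaction are done: respond to the
        # originator with the aggregated results.
        return {"txn_id": txn_id, "results": results}

def comm_service(txn_id, payload):
    # Stand-in for the communication service that emails/SMSes someone.
    return f"welcome email queued for {payload['email']}"

broker = Broker()
broker.subscribe("user.created", comm_service)
response = broker.publish("user.created", {"email": "a@example.com"})
```

The idempotency requirement matters here: because delivery is at-least-once over the message fabric, a redelivered "user.created" must not queue a second welcome email in the real system.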
Services themselves are RESTful HTTP APIs for manipulating the specific nouns they are in charge of. It's allowed us to separate concerns to a surprising degree and formalize business logic in a single area (the broker) for event-driven behaviors. It also gives us the flexibility to interface with third-party services in a manner that was impossible given the previous disastrous code.
"The beshitted legacy system" is a more eloquent turn of phrase for it than anything I've yet conjured up, so I salute you.
The most amusing part of that last bit was presenting concurrency tests that showed any concurrency > 1 ensured corruption (because the C-level who chose the original contractor/employee refused to believe the actual logical argument of its faults) and still having to deal with arguments of "it hasn't happened yet!" My reply: "Your house has never burnt down, yet you own insurance. And as an aside, I will note that it has happened on several occasions. You've just never had to actually solve it, and your prior monkey kept you in the dark about it."
There is probably a lot of legacy like this out there...
The first two years were essentially triage efforts to introduce at least some modicum of dependability and scalability while trying to avoid wholesale rewrites. Eventually, after some C-level management changes, along with a couple of years of my insistence that, delusions to the contrary, we did not in fact have any in-house design talent, the company decided to finally address the woeful usability of their product and hired a usability/design firm to create a new front-end. Given that most of the original code completely entwined presentation, models, views, and sewage plumbing, this was the opportunity to re-architect, with the caveat that we wanted to limit the scope of that effort to purely customer-facing areas: the homebrewed in-house "CRM" was at least as craptastic as the customer side of things, but it was a much larger codebase, and dealing with a ground-up change was a recipe for disaster.
So in effect we have two systems running in parallel, with rational database design and separation of concerns on the customer side, along with duplication of data into the older database to minimize impact on the internal stuff. The rolling replacement of the legacy system will then continue, removing a noun and its associated verbs one at a time. Basically performing the old Ship of Theseus trick, except at the end a wooden rowboat will have become a powered, steel-hulled yacht.
Not sure how you find the "people who have terrible codebases and don't know what to do" audience, but you have the writing skills for it.
I've always viewed the blog->consulting path as sort of an underwear gnomes problem.
Step 1: Blog about X
Step 2: ???
Step 3: PROFIT!(consult!)
The time and knowledge necessary to successfully build a web audience are non-trivial when you lack a pre-existing public persona or a measurable marketing budget.
If someone knows a non-flippant difference between SOA and microservices, I'd be interested to hear it.
In practice, most "SOA" architectures are microservices.
In a 'traditional' SOA architecture, the workflow is defined as a linear progression from one state to another, where each 'service' mutates (or not) the state of the data.
Determining 'which' services are used/called in the service bus is typically defined in an orchestration.
They definitely have some similarities, however one thing to note about 'Architecture' is that it's often more about ways of thinking about a problem as much as it is about 'solving a specific problem'.
Another helpful comparison might be:
SOA is like GNU Hurd and Micro-Services are like Unix.
A customer just asked for two-factor authN. I can make that change without affecting any of the business services. Getting it wrong means little more than a short-lived denial of service.
In traditional SOA authN would be wrapped up with authR in a service interface or façade, fronting the monolithic app. Not nearly as extensible or flexible.
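The point about swappable authN can be made concrete with a sketch. This is hypothetical illustration, not the commenter's system: authN lives in its own layer in front of the business service, so adding a second factor touches only that layer.

```python
# The business service knows nothing about authentication.
def business_service(request):
    return {"status": 200, "body": f"hello {request['user']}"}

def password_authn(handler):
    # Original single-factor layer.
    def wrapped(request):
        if request.get("password") != "secret":
            return {"status": 401, "body": "denied"}
        return handler(request)
    return wrapped

def two_factor_authn(handler):
    # The customer's new requirement: a second factor. Only this
    # layer changes; business_service is untouched.
    def wrapped(request):
        if request.get("password") != "secret" or request.get("otp") != "123456":
            return {"status": 401, "body": "denied"}
        return handler(request)
    return wrapped

app_v1 = password_authn(business_service)
app_v2 = two_factor_authn(business_service)
```

If the new layer is buggy, the failure mode is rejected logins (a short-lived denial of service), not corrupted business behavior.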
Netflix is known for open-sourcing a whole suite of infrastructure components that allow you to run a microservices architecture on AWS (https://github.com/Netflix). These include configuration management tools, event buses, monitoring solutions, and much more.
To my mind, the most important benefits of a microservice architecture are a clear separation between teams working on different services and the ability to upgrade said services autonomously without stopping or breaking the consumers, which allows for fast iteration. The important point is you don't just replace services, but introduce new versions and automatically retire old versions when all of the consumers upgrade to the new ones.
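The introduce-then-retire idea can be sketched with a toy registry. This is an illustration of the policy only, with made-up names; real systems do this with service discovery and deployment tooling rather than a dict.

```python
class ServiceRegistry:
    """Routes each consumer to its pinned service version; a version
    is retired only once no consumer still points at it."""

    def __init__(self):
        self.versions = {}    # version -> handler
        self.consumers = {}   # consumer name -> pinned version

    def register(self, version, handler):
        self.versions[version] = handler

    def pin(self, consumer, version):
        self.consumers[consumer] = version

    def call(self, consumer, *args):
        return self.versions[self.consumers[consumer]](*args)

    def retire_unused(self):
        # Automatically retire versions with no remaining consumers.
        in_use = set(self.consumers.values())
        for v in list(self.versions):
            if v not in in_use:
                del self.versions[v]

registry = ServiceRegistry()
registry.register("v1", lambda name: f"hi {name}")
registry.register("v2", lambda name: f"hello, {name}!")
registry.pin("billing", "v1")
registry.pin("web", "v2")
registry.pin("billing", "v2")   # last consumer upgrades...
registry.retire_unused()        # ...so v1 can be retired
```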
HTTP is great and all but it's not lightweight. I'm curious what happens when each incoming HTTP request from a client cascades into 5-10 HTTP requests inside your "Microservices" ecosystem. Does that scale? Granted, these may all take place on the same box but that still seems wasteful. Then there's the challenges of making sure each piece of the architecture is working correctly - and if it isn't, are you safely handling the error/routing to a known working node, etc?
Sounds quite difficult to cope with in practice.
One example case: you write a monolithic Rails app that is structured around services but all services are in memory service objects. In production you find that your search service is both doing far more work than your other services & getting queried more often. So you refactor your application so that a call to the search service is no longer an in memory function call but instead an RPC to a group of servers that only handle the search service and have been rewritten in Java/Go/C++ to be blazing fast. Since you wrote your app with services in mind, you probably don't change the API at all, it just becomes a wrapper for an RPC rather than a class in your monolithic app.
This way you don't automatically bulk up on unnecessary, expensive HTTP requests but you maintain the flexibility of optimizing modules of your app for performance.
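The refactor described above hinges on the callers never seeing the change. A minimal sketch of that seam (the class names and the RPC stub are invented for illustration; the real version would make a network call to the rewritten search cluster):

```python
class InMemorySearch:
    """The original in-process service object in the monolith."""
    def __init__(self, docs):
        self.docs = docs

    def query(self, term):
        return [d for d in self.docs if term in d]

class RpcSearch:
    """Same interface, but delegates to a remote search cluster.
    The network is stubbed out with a plain callable here."""
    def __init__(self, rpc_call):
        self.rpc_call = rpc_call

    def query(self, term):
        return self.rpc_call("search.query", term)

docs = ["rails app", "go service", "search infra"]

def fake_rpc(method, term):
    # Pretend this crossed the wire to the Java/Go/C++ cluster.
    return [d for d in docs if term in d]

# Callers only ever see .query(); swapping the implementation is invisible.
local = InMemorySearch(docs)
remote = RpcSearch(fake_rpc)
```

Because both classes satisfy the same interface, the monolith's call sites don't change when the search service moves out of process.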
The complication arising from splitting into microservices is that tracing and profiling calls become more difficult. So you have to start using new tools like Zipkin.
A well designed class interface can be turned into an HTTP (or socket) API easily enough. By the time you need to, you'll probably have the resources to monitor, scale and maintain the additional service properly.
And that's the first thing that I thought.
If the benefit of a microservice is that you can distribute it across multiple machines then you really just have SOA.
I've tried to use it several times but I've run up against limitations built into the design that make it infuriating to use. Mostly this is due to how ZeroMQ goes out of its way to hide the networking details and refuses to expose them even if you need to know.
Sounds like it's time for some research on why we call something a component and then cannot replace it at runtime. I understand Erlang has some features in this area.
Edit: By machines I meant the physical world we borrow our vocabulary for abstractions from, not servers. You can't change the tires on a car while driving it, or the belts on a vacuum while cleaning.
For some servers and mainframes you most certainly can replace components on the fly. It is a requirement for some places.
Weird, my other comment disappeared and you have dups.
I think it has a good use-case if you have one resource-intensive part of your application that has different scaling or completion time parameters than the rest of the application.
For large complex systems it's also a form of code-reuse across several different systems.
Perhaps it was ahead of its time, or maybe the idea just wasn't very good. I really don't know, but it's cool to see the same ideas popping up again.
It is a fundamental truth that software adopts the structure of the teams that create it. At whatever level you inspect it, microservices/SOA mirror most team structures more closely than any other model.
This makes the architecture easier and more natural for teams of human beings to handle.
Tools like Ansible allow you to do really cool things with microservices.
Interesting approach. I'd like to see examples of microservices applied to other domains.