OpenRPC and CORBA are out of fashion. Google Protocol Buffers help, but don't come with a standard RPC mechanism. HTTP with JSON is easy to do but has high overhead. Then there's the handling of failure, the curse of callback-oriented systems. ("He said he'd call back. Why didn't he call back? Doesn't he like me any more? Should I dump him?")
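To make the "easy to do but high overhead" point concrete, here's a minimal hand-rolled JSON-over-HTTP call in Python (the endpoint and method names are made up for illustration):

    import json
    import urllib.request

    def rpc(url, method, params):
        # One HTTP round trip per call: trivial to write, but every call
        # pays for headers, JSON encoding, and (without keep-alive) a
        # fresh TCP handshake.
        payload = json.dumps({"method": method, "params": params}).encode()
        req = urllib.request.Request(
            url, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # rpc("http://localhost:8000/rpc", "add", [1, 2])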
The lack of a good, standardized interprocess call mechanism results in interprocess call logic appearing at the application level. At that level, it's intertwined with business logic and often botched.
Could you please elaborate on the Windows part of your statement? What specifically are you referring to?
Other than the performance overhead (which I initially chalked up to "oh, COM is just slow I guess"), the way I finally noticed was when a pointer I passed across the COM boundary turned out to be invalid (because it pointed to memory in another process). Whoops! Everything was working perfectly up until that point.
(FWIW, I am pretty sure the invalid pointer thing only happened because I was passing a raw address around - VB6 doesn't have pointer types.)
Plus, since COM provides mechanisms for source-level and binary compatibility, you can leverage those for your RPC.
I won't call it awesome, but it's quite robust and gets used in many places on Windows.
At a more basic level, you can trivially implement RPC using Windows messages, though the security-model improvements in Windows 7 and 8 have made this a bit more complicated. There is excellent, straightforward infrastructure for establishing message loops and sending messages - really easy to get right - and it doesn't require your application to have a UI. Pretty much any thread can receive and process messages if it wants to.
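For illustration only, a minimal sketch of that pattern driven from Python via ctypes - Windows only, fire-and-forget, no reply channel; WM_APP is the conventional base for application-defined message ids:

    import ctypes
    import threading
    from ctypes import wintypes

    user32 = ctypes.windll.user32
    kernel32 = ctypes.windll.kernel32

    WM_APP, WM_QUIT, PM_NOREMOVE = 0x8000, 0x0012, 0x0000
    tid, ready = None, threading.Event()

    def server():
        global tid
        msg = wintypes.MSG()
        # A thread's message queue is created lazily; PeekMessageW forces
        # it into existence before we publish our thread id.
        user32.PeekMessageW(ctypes.byref(msg), None, WM_APP, WM_APP,
                            PM_NOREMOVE)
        tid = kernel32.GetCurrentThreadId()
        ready.set()
        # Classic message loop: blocks in GetMessageW, exits on WM_QUIT.
        while user32.GetMessageW(ctypes.byref(msg), None, 0, 0) > 0:
            if msg.message == WM_APP:
                print("request:", msg.wParam, msg.lParam)

    threading.Thread(target=server).start()
    ready.wait()
    user32.PostThreadMessageW(tid, WM_APP, 42, 7)   # the "call"
    user32.PostThreadMessageW(tid, WM_QUIT, 0, 0)   # stop the loop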
You accidentally summarized my experience with development on Windows in the nineties :-)
It's funny that what was designed as a feature of the ecosystem (tooling that abstracts complexity) completely turned me away from Windows development and into Linux development. Development on Linux was, at the time, much cruder and closer to the metal. The upside was that everyone was in the same boat, so documentation of low-level routines, as well as community support, was flawless.
(Not that I can, to this day, understand documentation written by kernel hackers, such as that for nftables, tc, or the ifb module, but I digress.)
Sounds like you want Thrift.
It's not an OS primitive (yet: kdbus), but why does it need to be?
Not only that, but there should be only one connection to the database, through which all traffic will go. Pulling that up through a network switch connected to a server with multiple cables, or pushing all of that through a single TCP connection, with a footnote that advises against rewriting it after a few years?
Also, everything explained there should run in a single thread, because surely it will be fast enough.
Good luck with that.
[If you find other funny bits that I missed, leave them in the comments below.]
Yes and no. Sometimes there is a dependency that is used on only 1% of requests, or even 0.1% of requests. And that's the kind of thing that's hard to track via 50x error codes, because they are really rare.
But anyway, it's not a fully designed feature, just a vision of what could potentially be done. So it's not going to be a stumbling block for implementation.
> Also from the Article we can learn that sending a 304 with content does not work as expected (!?)
Sending a 304 works as expected. Passing an arbitrary status code with arbitrary headers and an arbitrary body from the application through a framework doesn't work as expected.
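RFC 7232 is explicit that a 304 must not carry a body, so a framework is within its rights to drop or reject one. A minimal WSGI sketch of a handler doing it by the book, with a made-up ETag:

    def app(environ, start_response):
        # If the client's cached validator still matches, answer 304 with
        # headers only; a body here would be a protocol violation.
        if environ.get("HTTP_IF_NONE_MATCH") == '"v1"':
            start_response("304 Not Modified", [("ETag", '"v1"')])
            return []
        start_response("200 OK", [("ETag", '"v1"'),
                                  ("Content-Type", "text/plain")])
        return [b"hello"]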
> Not only that, but there should be only one connection to the database,
> through which all traffic will go. Pulling that up through a network
> switch connected to a server with multiple cables, or pushing all of
> that through a single TCP connection, with a footnote that advises
> against rewriting it after a few years?
Not sure I understand your question well. But note that I'm speaking about Python. We have many Python processes on the box anyway. Each of them has its own connections to the downstreams (e.g., a database).
Then, if we start writing asynchronous Python code, we need to send requests from multiple asynchronous tasks in each of those Python processes. I argue that it's more efficient to send requests from all tasks of a single process through a single downstream connection, as in the sketch below.
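A minimal asyncio sketch of the idea, assuming a line-oriented downstream protocol (the host/port and the PING command are stand-ins; framing and out-of-order replies are deliberately ignored):

    import asyncio

    class SharedConnection:
        # Many tasks, one TCP connection: callers enqueue requests, and a
        # single owner task writes them and hands back in-order replies.
        def __init__(self, reader, writer):
            self.reader, self.writer = reader, writer
            self.queue = asyncio.Queue()

        async def call(self, request: bytes) -> bytes:
            fut = asyncio.get_running_loop().create_future()
            await self.queue.put((request, fut))
            return await fut

        async def run(self):
            while True:
                request, fut = await self.queue.get()
                self.writer.write(request + b"\r\n")
                await self.writer.drain()
                fut.set_result(await self.reader.readline())

    async def main():
        reader, writer = await asyncio.open_connection("127.0.0.1", 6379)
        conn = SharedConnection(reader, writer)
        asyncio.create_task(conn.run())
        # Hundreds of tasks can now funnel requests through conn.call().
        print(await conn.call(b"PING"))

    asyncio.run(main())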
> Also, everything explained there should run in a single thread,
> because surely it will be fast enough.
Sure, a single I/O thread in C will outperform any Python processing of that data. That's true for 99.9% of use cases.
I don't know about Node, but I know it's perfectly possible to write "proper" multi-threaded programs in Perl (and it has been ever since the version string started reporting support for threads, a good 15 years ago).
It's not terribly relevant to the article (because you normally don't write multi-threaded programs in Python), but then why bring it up?
(I'm in the process of building a toy language.)
Is this more of a problem for a language with baggage, or is it general? I wonder if, for example, it's less of an issue in Go or Erlang...
0 - I do hear Go-lang users talk about microservices, which is weird, because they have decent enough concurrency primitives that they shouldn't need to split web applications up into microservices.
The architecture of an idiomatic Erlang-based system is essentially a microservice architecture.
> - I do hear Go-lang users talk about microservices, which is weird, because they have decent enough concurrency primitives that they shouldn't need to split web applications up into microservices.
Microservice architecture has motivations (loose coupling, distribution, independent scalability of components) that go considerably beyond "my language doesn't have decent concurrency primitives".
> loose coupling, distribution, independent scalability of components
If you have good abstractions for concurrency then distribution and independent scalability should be trivial in a single codebase. Loose coupling is usually a false dream. Microservices tend to get coupled at the network level instead of the code level. Yuck!
No matter how good your concurrency primitives, you cannot escape the network. Yes, I know, Erlang is awesome and has excellent distribution and concurrency capabilities out of the box, but you cannot write all the things in Erlang, nor should you want to. Different languages/services have to talk to each other somehow and at some point.
It can be much harder. Let's say that you have a zipcode<->address conversion library and you're deciding whether to use it within the web stack (option A) or to create an additional service with a REST API on a separate machine (option B). Microservices!
Option A means that it scales with your web stack. If you have 5 application servers today behind a load balancer and your load doubles then tomorrow you will need 10.
Option B means an entirely new set of servers. Not only do you still need those 5 application servers, you need an entirely new server for handling zipcode<->address translation. Let's say it's maxed out.
What happens when your load doubles then?
Well, you'll still need to scale up those 5 application servers running behind a load balancer, but you ALSO need an entirely new load balancer and two servers behind it for the zipcode<->address translation service.
>Different languages/services have to talk to each other somehow and at some point.
This doesn't mean that you need them to talk over a network layer.
I'm not saying I agree, but I do understand why some golang programmers might be evangelizing microservices as well.
You do get an opportunity to monitor each service more easily if they live on separate machines (or just in separate processes). If you want that with libraries, you need to add monitoring code to all libraries and then collect data from them to see where the bottlenecks are. You might also need to develop new tools to configure things like pool sizes and cache sizes for each library - for example, saying that you should have 5 threads (or handlers) handling mail messaging and 20 threads/handlers handling calculations. You might also want to be able to change these configurations in a live system. I think a lot of applications miss this monitoring and configuration part and then get into problems because of that.
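For example, a sketch of the kind of shim you'd have to bolt onto every library entry point to approximate what per-process monitoring gives you for free (all names here are hypothetical):

    import collections
    import functools
    import time

    stats = collections.defaultdict(lambda: {"calls": 0, "secs": 0.0})

    def monitored(name):
        # Count calls and accumulate wall time per "library" - the data a
        # separate process would expose to the OS for free.
        def deco(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    stats[name]["calls"] += 1
                    stats[name]["secs"] += time.perf_counter() - start
            return wrapper
        return deco

    @monitored("mail")
    def send_mail(message):
        ...  # the actual library call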
And of course microservices helps with using different languages for different parts, even though most organizations I've seen seem to mandate a specific language anyway.
(I'm a big fan of SOAs/microservices for companies that operate at Google-scale. Most companies do not, and for the rest of us - plain old libraries are very underrated. You can always slap an RPC layer or message broker on top of a library's API, but there's no reason to pay that cost until you need to. The real reason to break things up into microservices is when you run out of RAM on the box, or alternatively when you want better cache hit rates by focusing the processor on a small amount of code. You typically don't get there until you're serving thousands of QPS against a data set in the terabytes.)
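A stdlib-only sketch of that progression, with a made-up lookup table: the function is a plain library call today, and the HTTP wrapper is something you bolt on only if you ever actually need the service:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ADDRESSES = {"94103": "San Francisco, CA"}  # hypothetical data

    def zip_to_address(zipcode):
        # Today: just a library call, no network in sight.
        return ADDRESSES.get(zipcode, "unknown")

    class Handler(BaseHTTPRequestHandler):
        # Later, if ever: the same function behind a thin HTTP endpoint.
        def do_GET(self):
            zipcode = self.path.rstrip("/").rsplit("/", 1)[-1]
            body = json.dumps(
                {"address": zip_to_address(zipcode)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8000), Handler).serve_forever()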
Constraining complexity: it actually makes complexity worse (what do you do when microservice A goes down? Times out? Returns data you can't deserialize? Takes too long?). These are things you do not have to worry about if you are not doing any interprocess or inter-server communication.
Code reuse: No. Why would it help code reuse?
1. Support full threading and one of the memory models that allows it. For inspiration, look at Clojure, Rust, Go, and Erlang (in no particular order).
2. Have a standard way for processes to communicate, like Erlang has. My favourite would be to implement the SP protocol family developed as part of the nanomsg library (https://github.com/nanomsg/nanomsg/tree/master/rfc); see the sketch below.
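For a taste of the SP socket model, a sketch using the pynng bindings (pynng wraps nng, nanomsg's successor, which implements the same SP protocols):

    import pynng

    addr = "tcp://127.0.0.1:5555"
    with pynng.Rep0(listen=addr) as rep, pynng.Req0(dial=addr) as req:
        req.send(b"what is 2 + 2")
        print(rep.recv())   # REQ/REP handles routing and retries for you
        rep.send(b"4")
        print(req.recv())   # b'4'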
But the real issue behind all of that is that we lack the means to easily implement protocol stacks. Implementing a new protocol (especially in user space) is a task that can easily eat months or years of your precious time.
There are protocols that take months and years, but they are not so ubiquitous (with the obvious exception of HTTP, which is really complex and ubiquitous). So they may be developed after the basic tools are in place.
I think that after a few years, software businesses will realize that it was an investment with questionable advantages and go right back to what was working fine for the past decade and will continue to work fine.
It seems that Amazon started using microservices in 2002. Do you think 12 years is not enough to learn from mistakes?