
In the old days, Microsoft encouraged people to do exactly that with their COM-based applications, where a software component could be transparently migrated to another machine via DCOM. The original application needn't care where the component was now running, since its location in the universe was controlled by the registry. You could now scale your big fat client app out to many servers. What was once slow and bloated on the client machine could suddenly become fast and nimble on the 'high powered' server(s).
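
To make that location transparency concrete, here is a minimal C++ sketch; the CLSID, IID, IOrderProcessor interface and Submit method are hypothetical stand-ins, not anything from the original discussion. The client activates and calls the component the same way regardless of where it lives; the COM runtime reads the CLSID/AppID entries in the registry (e.g. RemoteServerName) and picks the location at activation time.

    #include <windows.h>
    #include <objbase.h>

    // Hypothetical IDs and interface; in a real project these would come from
    // a MIDL-generated header for a registered component.
    extern const CLSID CLSID_OrderProcessor;
    extern const IID   IID_IOrderProcessor;

    struct IOrderProcessor : public IUnknown {
        virtual HRESULT STDMETHODCALLTYPE Submit(long orderId) = 0;
    };

    HRESULT SubmitOrder(long orderId) {
        IOrderProcessor* p = nullptr;
        // CLSCTX_SERVER means in-proc, local EXE or remote server, whichever
        // the registry says this CLSID lives in. The client code never changes.
        HRESULT hr = CoCreateInstance(CLSID_OrderProcessor, nullptr, CLSCTX_SERVER,
                                      IID_IOrderProcessor,
                                      reinterpret_cast<void**>(&p));
        if (SUCCEEDED(hr)) {
            hr = p->Submit(orderId);   // looks like a local call; may be a DCOM round trip
            p->Release();
        }
        return hr;
    }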

Needless to say, this caused, at first, much excitement among VB devs (and a lot of selling of Windows NT 4 Server licenses), soon followed by consternation and disappointment.

Turns out:

- Software design for components running locally and components running remotely should be approached in veeery different ways. E.g. making 100 property assignments on a component running in your local process space is just fine, but doing that on an object that exists on a server 100 miles of copper away has very different performance characteristics (see the sketch after this list). Apps were suddenly unresponsive in many different and frustrating ways.

- Massively increased network load: storms of calls and responses to the servers just to display a form, and big round trips of data that had never existed before, strained the existing infrastructure. E.g. previously the db would fill the GUI's data grid directly with a query. Now the GUI would call the server component, which would fetch the data, serialize it into a different form (for ADO), and send it via DCOM to the GUI, which would show 10,000 rows. All very slowly.

- Massive increase in load on the domain controller: GUI apps happily created and destroyed objects that used to be local, but each creation was now a DCOM call requiring security checks that had never been needed before. I saw this bring domain controllers to their knees.

- As you said, applications could suddenly fail in new and exciting ways, since they were now on the network, and that entailed a whole new set of failures to be aware of.

- Massively increased complexity in installation and configuration, since every little service had to be installed and configured in its own unique way. Security also became harder, since per-service permissions (e.g. db permissions, filesystem permissions) also had to be managed.
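
To illustrate the first point above, here is a rough sketch of why per-property assignment is poison over DCOM; the ICustomerRecord interface, the put_Name/put_Street/Update methods and the CUSTOMER_DATA struct are hypothetical. Against an in-process object each property put is a cheap vtable call, but against a DCOM proxy each one is a full RPC round trip, so the usual fix was a 'chunky' method that ships all the state in one call.

    #include <windows.h>
    #include <objbase.h>

    // Hypothetical blob carrying everything the form edits (in reality a
    // marshalable struct defined in IDL).
    struct CUSTOMER_DATA { /* name, street, ... */ };

    // Hypothetical component interface.
    struct ICustomerRecord : public IUnknown {
        virtual HRESULT STDMETHODCALLTYPE put_Name(BSTR name) = 0;
        virtual HRESULT STDMETHODCALLTYPE put_Street(BSTR street) = 0;
        // ... dozens more property setters ...
        virtual HRESULT STDMETHODCALLTYPE Update(const CUSTOMER_DATA* all) = 0;
    };

    // Fine in-process; against a DCOM proxy every put_* below is its own
    // network round trip, so ~100 assignments means ~100 round trips.
    void SaveChatty(ICustomerRecord* rec, BSTR name, BSTR street /*, ... */) {
        rec->put_Name(name);      // round trip 1
        rec->put_Street(street);  // round trip 2
        // ... round trips 3..100 ...
    }

    // The remote-friendly design: send all the state in a single call.
    void SaveChunky(ICustomerRecord* rec, const CUSTOMER_DATA* all) {
        rec->Update(all);         // one round trip
    }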

Microsoft tried to help with MTS (their transaction service for hosting DCOM components), but by that time (early 2000s) people were already exploring web applications as the replacement for fat client apps and migrating away from this approach.

> I always dreamed of something like this where functions could be called as normal but they could be an RPC behind the scenes. The compiler would take care of serialization/deserialization and routing.

You can still do it today if you want; just investigate COM+ if you're on Windows. It's not the compiler that decides whether RPC is needed, but the COM runtime (via the Registry). It's all still there.
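
As a rough sketch of that last point, reusing the same hypothetical CLSID/IID as above: by default CoCreateInstance lets the COM runtime consult the registry to decide where the object lives, but a caller can also name the machine explicitly with CoCreateInstanceEx and a COSERVERINFO. Either way the routing decision is made by the COM runtime at activation time, not by the compiler.

    #include <windows.h>
    #include <objbase.h>

    extern const CLSID CLSID_OrderProcessor;   // hypothetical, as above
    extern const IID   IID_IOrderProcessor;

    // Explicitly activate the component on a named machine (e.g. L"APPSERVER01")
    // instead of letting the registry's RemoteServerName value decide.
    HRESULT CreateOnServer(const wchar_t* machine, IUnknown** out) {
        COSERVERINFO si = {};
        si.pwszName = const_cast<LPWSTR>(machine);

        MULTI_QI qi = {};
        qi.pIID = &IID_IOrderProcessor;

        HRESULT hr = CoCreateInstanceEx(CLSID_OrderProcessor, nullptr,
                                        CLSCTX_REMOTE_SERVER, &si, 1, &qi);
        if (FAILED(hr)) return hr;
        if (FAILED(qi.hr)) return qi.hr;
        *out = qi.pItf;   // the returned interface pointer is a proxy to the remote object
        return S_OK;
    }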



