
mediocre systems are encapsulated; great systems are co-designed.



On the other hand, mediocre systems often win in the market due to their greater predictability. Working with isolated systems is iterative, if inefficient, but working with an integrated system requires deep insight that is not always available.


Co-design is unrelated to encapsulation.


There's encapsulation as an engineering practice and then encapsulation as a religion.

This balls-to-the-wall approach to SOA is one of the shortest and straightest roads to software development hell. I've seen appalling amounts of money disappear into failing projects this way. (Ever wonder what happened at Colours? I used to, until I found out...)

In this wonderland of decoupled things you find pretty quickly that a lot of functionality gets duplicated across multiple layers, and whenever that functionality is at all subtle, you'll find that every implementation is wrong, each with different bugs.
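
A contrived sketch of what that duplication looks like in practice (both validators invented, each subtly wrong in its own way):

    import re

    # Layer 1: the gateway's idea of a valid e-mail address.
    def gateway_valid_email(addr):
        return "@" in addr  # bug: accepts a missing domain ("user@")

    # Layer 2: the service layer validates again, differently.
    def service_valid_email(addr):
        # bug: rejects legal plus-addressing ("user+tag@example.com")
        return re.fullmatch(r"[\w.]+@[\w.]+\.\w+", addr) is not None

    gateway_valid_email("user@")                 # True: wrongly accepted
    service_valid_email("user+tag@example.com")  # False: wrongly rejected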

A while back I worked with a guy who was into REST, HATEOAS, and PhD theses about as valid as Carlos Castaneda's, and the result was his team spent a year and a half building an app that took 20,000 REST calls and 40 minutes to start. In a week of whole-system thinking I was able to get that down to one POX call and 20 seconds.

He never forgave me.
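
For the curious, this was basically the classic N+1 problem at startup. A minimal sketch, with invented endpoints, of the two approaches:

    import requests  # third-party HTTP client

    BASE = "https://api.example.com"  # hypothetical service

    # Chatty startup: one GET per resource, thousands of round trips.
    def load_chatty(item_ids):
        return [requests.get(f"{BASE}/items/{i}").json() for i in item_ids]

    # Whole-system alternative: one coarse-grained call that returns
    # everything the client needs to start, in a single document.
    def load_batched(item_ids):
        resp = requests.post(f"{BASE}/items/bootstrap", json={"ids": item_ids})
        return resp.json()["items"]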

If you're in one of those organizations that uses scrum as a substitute for project management, failure is even more assured with SOA, because then, when a whole-system problem needs to be solved, everything has to be synchronized against the phony deadlines imposed by the process, and something that could be done in 6 days ends up taking 6 weeks.

SOA gives people the illusion that they're managing complexity, so now they can throw a 20-person team at a job a 4-person team could do. The 20-person team might get the job done 10% more quickly than the 4-person team, but it's five times as productive at creating bugs. The 4-person team is more "agile" because it spends its resources on satisfying business needs rather than creating levels of encapsulation just to have encapsulation, then discovering at the last minute they put the walls in the wrong places and need to take out the Sawzall (if they don't hand out the pink slips).

The ultimate way to fight complexity is to fight it directly, that is, don't create it. Every artifact you create is like a puppy you'll need to take care of. Every address space you cross is another thing that can screw up, so don't do it because "it's the thing to do", do it because there is a real gain that overcomes the very real cost.


This is an astute observation, and I think it could benefit from more examples:

On abstraction layers: ZFS was able to achieve something very novel by cutting across all the abstraction layers that had accumulated in storage management over the decades. I think there is still a lot of work to be done in this direction. I also think progress here has to go in cycles - first we pile on abstraction layers in our struggle to wrap our minds around the problem area, then once the problem area is understood we cut through the abstraction layers and create an integrated design (co-design, in your terms), and then the cycle starts anew. Breaking up the problem into abstraction layers is akin to a child learning to write - at first he has to do it letter by letter, but as proficiency is gained, the letters blend into words, and words into sentences. Beyond spelling, learning to compose a good text follows a similar pattern.

On separation of concerns between client and server: resilience of data is given by the article's author as a server concern, and I think that's one great example where separation is actually harmful. I recently designed a system where a server and a client (a mobile device) cooperate in preserving data, achieving much greater resiliency than the server alone could achieve without expensive investments on the server side. In other words, the separated design is more expensive.


A server cannot rely on the client to store its own domain's state, while the client can store user-related state to improve performance and usability.

For example, Twitter cannot rely on a client to store tweets, though the client can cache recent tweets to improve the user experience.

Likewise, for an app that plays music, a bookmark of the last song played can be stored entirely on the client. Yet if that bookmark needs to sync across multiple devices (as on a Kindle), it becomes a server concern.
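
A minimal sketch of that division of labor (names invented): the server owns the timeline, and the client's copy is purely a cache:

    class TweetClient:
        def __init__(self, server):
            self.server = server  # source of truth for domain state
            self.cache = []       # recent tweets, a pure UX optimization

        def timeline(self):
            try:
                # refresh from the authoritative copy when we can
                self.cache = self.server.fetch_timeline()
            except ConnectionError:
                pass  # offline: fall back to the cached copy
            return self.cache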


A server most certainly can rely on the client to store a redundant copy of the data, as there are ways to ensure data authenticity. Should the server fail and be restored from an hour-old backup, the last hour's worth of changes can be fetched from the client after integrity verification. Thus the server could get away with hourly backups instead of every-five-second ones, or full-blown database mirroring.
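
A minimal sketch of one way to do the integrity part, assuming the server MACs each change before handing the client its redundant copy (names and key handling simplified):

    import hmac, hashlib, json

    SECRET = b"server-side key"  # never shipped to the client

    # Sign each change before the client stores its redundant copy.
    def sign_change(change):
        payload = json.dumps(change, sort_keys=True).encode()
        mac = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return {"change": change, "mac": mac}

    # After restoring an hour-old backup, replay only the client-held
    # entries whose MACs verify; the client cannot forge or alter them.
    def replay_verified(entries):
        for e in entries:
            payload = json.dumps(e["change"], sort_keys=True).encode()
            good = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
            if hmac.compare_digest(e["mac"], good):
                yield e["change"]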


Great response, thanks. Incidentally, this is a trap I find myself falling into over and over. Too much decoupling, too many moving parts. It is hard to find that line between good design and over-engineering.


Please explain, as the sentence makes no sense. The two things have nothing to do with each other.




