

Ruby on Rails and the importance of being stupid - lackbeard
http://blogs.law.harvard.edu/philg/2009/05/18/ruby-on-rails-and-the-importance-of-being-stupid/

======
dasil003
Judging by some of the comments here, it seems people are giving Greenspun a
free pass because he's apparently getting at a deeper point. However, when I
read the article, I found it chock full of straw men. The comparison between a
competent Microsoft programmer and a complete bumbling fool labeled as an MIT
Genius is at best intellectually dishonest. I wrote a lengthy response which
I'll post here in case his moderator decides he doesn't like it:

I read the moderation policy where it's suggested that reviews of the post are
not valued. However, I feel an obligation to point out the factual errors in
this post. It contains dozens of nonsensical assertions that could be very
misleading to anyone who doesn't understand Rails or web development in
general.

My first general critique is that there is no real comparison going on here.
It says the business guy called up Microsoft and they recommended buying a
bunch of hardware, but there's no discussion of who developed the site or how
they got up and running. There's no discussion of the price of the hardware,
which clearly looks to be well into the 5-figures, or the price of the fiber
connection at home, system administration, backups, etc. To get into some
specifics:

 _The programmer, being way smarter than the swaptree idiot, decided to use
Ruby on Rails, the latest and greatest Web development tool. As only a fool
would use obsolete systems such as SQL Server or Oracle, our brilliant
programmer chose MySQL._

This is a caricature of an "MIT Genius" that doesn't jibe with reality. Anyone
who was actually that smart would know better than to dismiss Oracle in favor
of MySQL. They may prefer using Ruby on Rails and be more productive than if
they used .NET, but they wouldn't go around calling people idiots for such
superficial reasons. Therefore you're not describing an actual genius, just
someone who thinks they are a genius but is actually a fool. Using such a
person as the basis for an argument of why Microsoft's recommendations are
better than Rails is intellectually dishonest.

 _How do you get scale and reliability? Start by virtualizing everything. The
database server should be a virtual “slice” of a physical machine, without
direct access to memory or disk, the two resources that dumb old database
administrators thought that a database management system needed._

The reason virtualization is used in the web deployment world is that it gives
you access to fast, reliable hardware when you need less than a full machine's
resources. A degenerate example: if your capacity requirements could be met by
a 250 MHz processor, you would get better value from 1/8th of a 2 GHz server.
The vast majority of sites don't need dedicated hardware, which you seem to
imply is cheaper, but it clearly is not if you are leasing server capacity.

 _Ruby and Rails should run in some virtual “slices” too, restricted maybe to
500 MB or 800 MB of RAM. More users? Add some more slices!_

I'm going to assume you are talking about EngineYard here, since that is the
managed Rails hosting provider I am most familiar with and is roughly in line
with your pricing figures below. First, the 500 or 800 MB is just a base
amount of RAM that is good for most small Rails apps. When that starts to run
out, the solution is NOT to add more slices; you simply commission more RAM.
EY can do this without even restarting your slice. Incidentally, you can also
commission more CPU if you need it. The reason they start with two production
slices is redundancy. One of your slices goes down for some reason? That's
okay, because there's a backup.

 _The cost for all of this hosting wizardry at an expert Ruby on Rails shop?
$1100 per month._

What you described above is a very poor description of what you are paying for
at a managed hosting provider like EngineYard. I will describe managed hosting
in a minute. But to compare to your unmanaged Microsoft example: I currently
pay $8/month for 256 MB of unmanaged hosting, which is plenty to serve
significant traffic on a well-optimized app. This is an order of magnitude
less than the Verizon FiOS line _alone_ , and provides much better network
connectivity (i.e. multiple tier-1 connections, lower latency to more
endpoints).

With managed hosting at EngineYard, you are not just paying for the server.
You are essentially paying for a full-time system administrator. They have
people all over the world ready to help you at a moment's notice, any time of
day or night. They proactively monitor your server and contact you if they
notice any abnormalities. They provide a large suite of finely tuned recipes
and standard software installations that they can install on a moment's
notice, and will tie into their monit-based server monitoring setup. The
individual machines in the cluster are optimized for their specific tasks. The
network hardware and topology are optimized for real-world usage scenarios.
They continuously tune the machines for throughput and move clients around to
avoid bottlenecks. They will even take significant steps toward helping the
client tune their own application, above and beyond their contractual
obligations for server administration. In short, you've completely ignored 95%
of what they do, and painted it as extremely expensive without even providing
a comparison against the overhead costs of buying and managing your own
servers.

 _For the last six months, my friend and his programmer have been trying to
figure out why their site is so slow. It could take literally 5 minutes to
load a user page. Updates to the database were proceeding at one every several
seconds. Was the site heavily loaded? About one user every 10 minutes._

If a request on an unloaded server takes 5 minutes to load, and the programmer
cannot figure out why in 6 months, then that programmer is incompetent, plain
and simple. Laying this at the feet of Rails is just ridiculous.

 _I began emailing the sysadmins of the slices. How big was the MySQL
database? How big were the thumbnail images? It turned out that the database
was about 2.5 GB and the thumbnails and other stuff on disk worked out to 10
GB. The servers were thrashing constantly and every database request went to
disk. I asked “How could this ever have worked?” The database “slice” had only
5 GB of RAM. It was shared with a bunch of other sites, all of which were more
popular than mitgenius.com._

Are you implying that you need enough RAM to keep the entire database in
physical memory? That is patently false. In a worst-case scenario, yes, it
could hurt performance quite a bit, but disk access is not nearly as slow as
implied above. I've served tons of sites on plain shared hosting (not even
virtualized) with much higher load and orders of magnitude better performance
than you are describing here.

 _How could a “slice” with 800 MB of RAM run out of memory and start swapping
when all it was trying to do was run an HTTP server and a scripting language
interpreter? Only a dinosaur would use SQL as a query language. Much better to
pull entire tables into Ruby, the most beautiful computer language ever
designed, and filter down to the desired rows using Ruby and its
“ActiveRecord” facility._

This is nonsense, Philip. Please don't take this as an ad hominem, because
there's no other way to put it. What you described here is 100% pure nonsense.
ActiveRecord, like any ORM, abstracts away some SQL in order to simplify
common database interactions. The lion's share of ActiveRecord code is all
about constructing efficient SQL. When you are developing with Rails, it shows
you all the SQL running in the development log, and you can quickly spot n+1
errors. If you need something more efficient, it offers plenty of levels of
access, right down to pure SQL.
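The n+1 pattern is easy to see with a toy sketch. This is pure Ruby, with a
hypothetical ToyDB class standing in for ActiveRecord and the database; the
query counts are the point, not the API:

```ruby
# Toy stand-in for an ORM, just to illustrate the n+1 pattern described
# above -- not real ActiveRecord code. Each call to `query` counts as
# one round-trip to the database.
class ToyDB
  attr_reader :query_count

  def initialize
    @users = [{ id: 1, name: "a" }, { id: 2, name: "b" }, { id: 3, name: "c" }]
    @addresses = { 1 => "1 Main St", 2 => "2 Elm St", 3 => "3 Oak St" }
    @query_count = 0
  end

  def query(_sql)
    @query_count += 1
  end

  # n+1 style: one query for the users, then one more per user.
  def naive_streets
    query("SELECT * FROM users")
    @users.map do |u|
      query("SELECT * FROM addresses WHERE user_id = #{u[:id]}")
      @addresses[u[:id]]
    end
  end

  # Eager-loading style (what an :include-style eager load generates):
  # two queries total, no matter how many users there are.
  def eager_streets
    query("SELECT * FROM users")
    query("SELECT * FROM addresses WHERE user_id IN (1,2,3)")
    @users.map { |u| @addresses[u[:id]] }
  end
end

db = ToyDB.new
db.naive_streets
naive = db.query_count   # 1 + N = 4 queries for 3 users

db2 = ToyDB.new
db2.eager_streets
eager = db2.query_count  # 2 queries regardless of N
```

The naive count grows linearly with the dataset; the eager count stays
constant, which is exactly what spotting n+1 in the development log buys you.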

 _In reviewing email traffic, I noticed much discussion of “mongrels” being
restarted. I never did figure out what those were for ... What am I missing?
To my inexperienced untrained-in-the-ways-of-Ruby mind, it would seem that
enough RAM to hold the required data is more important than a “mongrel”. Can
it be that simple?_

I'm shocked that a programmer would speculate so wildly as to say something
like this. A mongrel is an application server. I don't understand what you
seem to think it is, but it's simply the process that serves Rails requests
handed to it by the web server and passed back to the client. Typically you
run more than one so you can serve multiple requests concurrently, but for a
well-optimized app usually no more than 3 or 4 are necessary. Rails uses a
non-threaded, shared-nothing architecture, which means you can scale
horizontally across unlimited servers. Note that I am not talking about
virtualized servers. I'm talking about when you have more traffic than the
biggest server in the world can handle: Rails will let you scale out
painlessly at the web server level until your database can no longer be served
by a single box. At that point you need to look at database sharding, or
alternative data stores using MapReduce or some other scalable database
solution.
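A minimal sketch of why the shared-nothing model scales at the web tier: since
no mongrel holds per-user state in process, a balancer can hand any request to
any of them. The Balancer class below is purely illustrative (real deployments
would use nginx, Apache, or a hardware balancer in front of the mongrels):

```ruby
# Toy round-robin balancer over a pool of mongrel ports. Because the app
# tier is shared-nothing, it doesn't matter which process gets which
# request, so adding a port adds capacity linearly.
class Balancer
  def initialize(mongrel_ports)
    @ports = mongrel_ports
    @next = 0
  end

  # Each request goes to the next mongrel in the list.
  def dispatch
    port = @ports[@next % @ports.size]
    @next += 1
    port
  end
end

balancer = Balancer.new([8000, 8001, 8002])
hits = Hash.new(0)
90.times { hits[balancer.dispatch] += 1 }
# hits == { 8000 => 30, 8001 => 30, 8002 => 30 }
```

The database is the part that does NOT scale this way, which is why sharding
or alternative data stores enter the picture only at that layer.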

None of this is to say Rails doesn't have its warts. Ruby is memory-hungry,
leaky, and relatively slow. Deployment has traditionally been very complicated
compared to something like PHP (although it's much improved with Phusion
Passenger, a.k.a. mod_rails for Apache/Nginx). There are many reasons why you
would be well advised not to use Rails; however, this article doesn't touch on
any of them. Rails, just like Oracle, .NET, Java, or many other technologies,
is a proven platform with pros and cons. In this article you pit an apparently
competent programmer developing swaptree.com against what can be described as
nothing less than a complete bumbling idiot using Rails. You insist the cost
of Rails is high without any justification or direct comparison against the
costs of swaptree.com.

I've read your blog in the past and found it to be pretty interesting, which
is why I've taken the time to write this response, and suggest politely that
you retract this article.

~~~
tvon
You are glossing over a lot of sarcasm, I think.

~~~
dasil003
Fair. However it's really hard to isolate the sarcasm because there's no real
factual meat to the article.

~~~
dkarl
My short reaction: Good lord, you know all that and still feel like the
article is targeted at you? Or at Rails as a technology?

Long reaction: You missed the point of the article, which is that keeping on
top of the latest and greatest technologies is almost never necessary, and it
is _never sufficient under any circumstances_. You don't have to know what a
mongrel is. You do have to understand the orders-of-magnitude difference
between different levels in the memory hierarchy. (RAM is much better than
disk -- a simple, stupid fact that people ignore all the time.) There are lots
of people running around with credentials and hot technologies who don't know
what they're doing, and there are lots of young people who worship those guys
and spend their time running after trendy stuff because they haven't yet
figured out the difference between learning technology and deciding what to
wear. (Which might not be as bad as relying on engineering principles to
choose your wardrobe. Hmmm, personal food for thought.)
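The RAM-versus-disk gap is easy to demonstrate with a rough Ruby
micro-benchmark. Exact numbers vary wildly by machine, and the OS page cache
softens the disk side considerably here, so a genuinely seek-bound workload is
far worse than this shows:

```ruby
require "benchmark"
require "tempfile"

# The same value fetched from an in-memory Hash versus re-read from a
# file each time. Only the ratio matters, not the absolute timings.
file = Tempfile.new("row")
file.write("some row of data")
file.flush

cache = { key: "some row of data" }

mem_time  = Benchmark.realtime { 10_000.times { cache[:key] } }
file_time = Benchmark.realtime { 10_000.times { File.read(file.path) } }

# file_time comes out far larger than mem_time, and this is the *good*
# case: the page cache is absorbing the actual disk seeks. A thrashing
# server whose working set exceeds RAM has no such luck.
file.close
file.unlink
```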

Sure his article isn't particularly original in intent or execution, but the
need for this article is perennial. You have to keep updating it because it's
aimed at people who only pay attention if you talk about the current latest
and greatest. That's why Rails was the perfect victim -- that's where his
target audience _is_ right now. (And the Microsoft stack is the perfect frumpy
foil to Rails.) Not that the Rails community doesn't contain other kinds of
people; it evidently does, or posts like yours wouldn't exist. But it is also
The Trendy Thing and is therefore cursed with attracting the naive my-
favorite-band-is-better-than-yours types who think "follow the buzz" is _the_
successful strategy for _all_ domains of life.

Fast-forward ten years, and I'm sure he'll have written the same article with
the blanks filled in with another hot technology. Which is a good thing.

------
jimboyoungblood
Yawn. Yet another misinformed "Ruby/Rails can't scale" article.

Seems to me the main problem is that his "MIT trained" friend had no
experience building a scalable web service. He would've botched it in PHP,
Python, Java, whatever. There's nothing about his main mistakes -- running
your database on a shared server, naive (ab)use of SQL -- that is unique to
Rails.

And this bit: "pull entire tables into Ruby, the most beautiful computer
language ever designed, and filter down to the desired rows using Ruby and its
“ActiveRecord” facility" is _completely_ incorrect and makes it obvious Philip
Greenspun knows less than nothing about what he's ranting about.

~~~
mattmaroon
I didn't read "Rails can't scale" at all. Did you even finish it? He
recommended still using Rails at the end.

~~~
chaostheory
Well, it would be bad advice for the programmer to simply throw away all of
his work at that moment and start from scratch, regardless of what you think
of the technology he used.

"Rails can't scale" was implied.

~~~
mattmaroon
I don't think it was implied at all, especially since he recommended using it
at the end. It's implied only if you read the title and nothing more.

What was both implied and directly stated was that a cloud-based architecture
is often not the best idea for a lot of people, despite the modern mania for
it.

~~~
plinkplonk
Phil specifically addresses the idea that he is dissing RoR in a comment
(emphasis mine):

"Angry Rails Enthusiasts Whose Comments I Deleted: A lot of the comments were
of the form “Your assertion that it is impossible to build a responsive Web
site with Ruby on Rails is wrong. Rails is in fact great if programmed by a
great mind like my own.”

The problem with this kind of comment is that _I never asserted that Ruby on
Rails could not be used effectively by some programmers._

 _The point of the story was to show that the MIT-trained programmer with 20
years experience and an enthusiasm for the latest and greatest ended up
building something that underperformed something put together by people
without official CS training who apparently invested zero time in exploring
optimal tools._

Could some team of Rails experts have done a better job with mitgenius.com?
Obviously they could have! But in the 2+ years that our MIT graduate worked on
this site, he apparently did not converge on an acceptable solution.

My enthusiasm for this story has nothing to do with bashing Ruby or Rails. I
like this story because (1) it shows the fallacy of credentialism; a undergrad
degree in CS is proof of nothing except that someone sat in a chair for four
years (see [http://blogs.law.harvard.edu/philg/2007/08/23/improving-
unde...](http://blogs.law.harvard.edu/philg/2007/08/23/improving-
undergraduate-computer-science-education/) for my thoughts on how we could
change the situation), (2) it shows what happens when a programmer thinks that
he is so smart he doesn’t need to draft design documents and have them
reviewed by others before proceeding (presumably another set of eyes would
have noticed the mismatch between data set size and RAM), (3) it shows that
fancy new tools cannot substitute for skimping on 200-year-old engineering
practices and 40-year-old database programming practices, and (4) it shows the
continued unwillingness of experienced procedural language programmers to
learn SQL and a modicum of RDBMS design and administration, despite the fact
that the RDBMS has been at the heart of many of society’s most important IT
systems for at least two decades."

That is exactly what I understood from the article.

I don't see any Rails bashing in the original article; you would have to
cherry-pick phrases to get that idea. I read the HN comments first and
thought Phil had gone off on a rant against RoR, to judge from some comments
here.

That will teach me to read HN comments before reading the original article!

~~~
dasil003
It doesn't show any of that because it's all made up. He just slapped together
a story that would appeal to someone like you based on your preconceptions,
but there is no actual argument. The whole thing could be reduced to "idiots
can't write software" and it would lose no substance.

Even some of these points that are supposedly common-sense engineering wisdom
are specious. Do you need to draft design documents to build a workable
product? Of course not! Is the first thing you should do when you start a new
website to buy $20k worth of hardware? No! Do you need enough RAM to hold your
entire database? Maybe it's the best optimization you can do, but it's far
from a foregone conclusion.

Why am I so vitriolic? Because the article is not truthy. The quote above
says "MIT-trained" in the same sentence as "without official CS training." Uh,
it doesn't get much more official than MIT. Suggesting that a programmer with
20 years of experience couldn't get a single web page to load faster than 5
minutes is a flight of fancy, plain and simple.

I might as well write a long-winded story about how Microsoft hired a
chimpanzee to program the next version of Word and failed, conclude that
mammals make terrible programmers, and add that the chimp chose C based on
ill-informed simian whimsy.

------
imownbey
"Bad programmer writes bad code"

This is hardly a new idea. The fact that a bad programmer used Ruby on Rails,
or Django, or PHP, or C++ to write bad code and implement it on a shoddy
system is no reflection on anything. This is essentially a story of someone
who took good advice for a hosting environment, and someone who took bad
advice for a hosting environment.

Learn your tool; don't buy into the hype. Make sure you are aware of the
reason behind everything that you do (because "I read it on a blog" is not a
reason). Don't be a bad programmer.

~~~
DougBTX
Simply knowing particular languages has been pointed out on sites not too far
from here as a sign of a Good Programmer. This counts as evidence to the
contrary. But not absolute evidence; I guess we all still have to think for
ourselves.

~~~
Periodic
I could understand an argument that knowledge of certain languages is
correlated with being a Good Programmer.

This guy seems to have heard about the cool tools of the day, jumped in, and
failed mostly due to a lack of experience.

One great lesson of experience is that there is often a simple solution to
complex problems.

------
ars
"Configure the system with no swap file so that it will use all of its spare
RAM as file system cache."

That will do EXACTLY the opposite of what he wants. Give it a swap file and
unneeded parts of memory will be swapped out to disk, freeing memory for use
as a file system cache.

There is _never_ a reason to configure a system without a swap file - except
if it's a laptop and you don't want the disk to spin up.

Don't want the system to use swap space? Don't allocate more memory than you
have (or buy more memory), but disabling the swapfile never helps. In some
cases disabling it doesn't hurt anything, but it never helps.
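On Linux there is also a middle ground between disabling swap outright and
accepting the default behavior: the vm.swappiness knob. This is a hedged
sketch; the right value depends entirely on the workload:

```
# /etc/sysctl.conf fragment (Linux-specific; the value 10 is a judgment
# call, not a universal recommendation):
#
# Keep swap enabled, but tell the kernel to prefer dropping file-system
# cache over swapping out application pages.
vm.swappiness = 10
```

Apply it without a reboot by running "sysctl -p" as root.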

~~~
cma
Why should it ever be acceptable for someone to sit down in the morning to a
machine with 4GB+ RAM and have things like a volume OSD take tens of painful
seconds to swap in because the night before the machine ran 'updatedb' and the
system decided to swap out a few bits of "unused program memory" with a
useless cache of the entire disk index?

~~~
jimboyoungblood
Since you know its useless, why not just turn off the updatedb cron job?

~~~
sokoloff
Because updatedb isn't inherently useless; rather, file system caching
(especially for pages read once) shouldn't eject application pages from RAM
to disk.

Being able to run locate and have it quickly return an accurate result _is_
useful at times. I just don't want it paging my entire session out to disk
every night in order to do that.

~~~
ars
"I just don't want it paging my entire session out to disk every night in
order to do that."

It doesn't.

------
lr
This is a slam on virtualization, not Ruby/Rails. Bad title, but the overall
point remains: When you buy a "slice" of something, you have no idea what you
are really buying. If you buy a piece of hardware with 32GB of RAM, then
that's what you get. And if you know what you are doing, it is going to be
much cheaper than buying "slices." In other words, the whole pizza is always
cheaper than if you were to pay for 8 separate slices.

~~~
jmtulloss
Most virtualized infrastructure providers give you some guarantee of how much
RAM, CPU, and diskspace you will have available. You may be able to exceed the
limits from time to time, but the minimums should be clear when you sign up.

~~~
Andys
I haven't seen any that guarantee disk access latency.

~~~
Xichekolas
Probably because it would be impossible without having a dedicated physical
disk for each slice on the machine, or some kind of really fancy network
storage array.

Maybe that will change once SSDs become the default hardware. Without the
bottleneck of seek time, latency should be much easier to quantify. Also,
since you wouldn't be penalized for "context switching" (I know this term
doesn't really apply to disk I/O, but I mean switching disk jobs often, which
requires a head move on HDs), you could maybe someday slice up the SSD's time
like a CPU's and guarantee it directly. (For instance, if the SSD is capable
of 200 Mbps, your slice could be guaranteed 10 Mbps. Or something more
technically realistic; I am but an amateur.)

------
lsc
RAM is king for most workloads; and he's right, it's ridiculously cheap. If
you need more than 16 GiB of RAM, it makes a lot of sense to buy your own
hardware and co-locate it.

Now, I disagree about keeping it in your basement, at least once you have
users on it. Co-locating a 2-CPU box is going to cost you around $100/month,
and you get much better connectivity than DSL at that price. DSL isn't much
cheaper if you get a reasonable uplink speed (at least around here), and I
don't know about you, but the power in my house (I live in California) isn't
exactly /enterprise grade/.

But yeah, I see a largely untapped market for renting high-ram otherwise-cheap
servers; I've rented out one 32GiB server with a bunch of drives (and some
slow CPUs) to a guy for $1200 setup and $175/month... once my next load of
servers is up and built, I'm thinking about chasing that business model again,
if I can build the servers faster than I get new vps signups, anyhow.

------
mtarnovan
The problem is not that you need better hardware for this scenario; you just
need a better programmer.

"Not helping matters was the fact that the sysadmins found some public pages
that went into MySQL 1500 times with 1500 separate queries (instead of one
query returning 1500 rows)."

Looks like someone forgot to use :include on some finders. Let's say you have
1000 users with an address each. This will produce 1000 SQL queries:

    User.all.each do |user|
      p user.address.street_name
    end

This, however, will only issue two queries:

    User.all(:include => :address).each do |user|
      p user.address.street_name
    end

~~~
sunkencity
Well, if you know your way around RoR, the first thing you do after most
development is done is load the query_reviewer plugin to get automated
profiling on each page, then take steps to limit the number of queries, either
with memcached or by adding indexes. It's not unusual to go from a couple of
hundred queries per page to fewer than 10 in a day of optimizing.

------
bittersweet
Interesting read. I work with Ruby on Rails myself but I'm not that
knowledgeable about scaling it.

It seems to me they didn't really know what setup they were running if they
are wondering what a 'mongrel' is.

I hope they weren't trying to serve the site on only a couple of mongrels.

My first thoughts: benchmark a bit, and use a tool like FiveRuns to find out
what's really happening. I wonder what the real bottleneck is.

Of course they shouldn't use a shared database server, but I'm wondering: are
they using caching? From what I'm reading about that site, I think they could
cache the hell out of it. I've only used the built-in Rails cache methods, but
a tool like memcached should help with all the database requests.
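The read-through caching idea being described here can be sketched in a few
lines of plain Ruby. A Hash stands in for memcached, and the MiniCache class
is made up for illustration; a real app would use a memcached client and
Rails' caching helpers:

```ruby
# Minimal read-through cache: return the cached value if present,
# otherwise run the block (the expensive database query), store its
# result, and return it.
class MiniCache
  def initialize
    @store = {}
  end

  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

db_hits = 0
cache = MiniCache.new

3.times do
  cache.fetch("user/42/profile") do
    db_hits += 1                 # simulated expensive query
    "profile page fragment"
  end
end
# db_hits == 1: the database is only touched on the first request;
# the next two requests are served from memory.
```

This is why caching helps so much on read-heavy pages: repeat requests never
reach the database at all.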

~~~
jcapote
Thankfully there are numerous parties out there that will handle the scaling
issues for you (heroku.com, engineyard.com) so you can concentrate on what's
important, the app itself.

~~~
jmillikin
If the application is performing idiotic operations such as "[querying] MySQL
1500 times with 1500 separate queries (instead of one query returning 1500
rows).", merely moving to a better provider or using more powerful servers
won't help. A modern web-based application is very complex -- the programmer
_has_ to understand how the database and application communicate, how HTTP
works, how to cache data, etc.

~~~
pelle
As a RoR developer, I have often been hired to optimize various services. One
of the biggest issues I have seen is this explosion in the number of queries.

I have routinely seen requests at clients issue 2-3000 queries (even with
query caching). Most of them are small, but at that quantity it doesn't matter
how small and efficient each query is.

I love AR to death as an OR library, but it is extremely easy to get into
these kinds of issues when you iterate over a large dataset and then call
associations of associations without thinking too much about it. I don't think
it is only an AR issue; conceptually I think it is true for all ORMs.

They can be a PITA to unravel and are often very hard to fix in a clean,
AR-like way in more complex data models. Normally you end up doing some fairly
un-AR-like preloading, as you would in a pre-ORM app, which while ugly works.

~~~
mikeryan
Agreed. I do all my dev with QueryTrace running in order to make sure I'm not
looping back into individual queries.

For those who don't know: Rails doesn't load the children of AR objects by
default, so if you do something like

    SELECT * FROM books

and then iterate through the books and get books.author_name where the author
data is in a related table, you're going to get a separate query for each row.
~~~
ericb
Usually the point where you could get into trouble would look more like:
book.author.name in the given example.

------
blader
Ruby on Rails and the importance of being COMPETENT.

------
tjogin
Shouldn't the lesson to take away from this rather be that it's important to
_not_ be stupid? Clearly, the "MIT trained" programmer doesn't know what he's
doing, at all.

I'd expect similarly disastrous results had he used the same 100% ignorant
approach with any other language or platform.

------
rbranson
Umm, it sounds like some tweaking of your ActiveRecord::Base.find calls with
the ":include" parameter might improve performance by 10,000x as well.

------
chaostheory
This is a little funny. This isn't the first time Greenspun has evangelized
the benefits of MS ASP (and now ASP.NET) vs. technology X (back then he was
deriding Java). The structure of the post is the same: some sensationalism
with some missing context (since he doesn't usually do his homework on what
he is criticizing).

I guess some things don't change (much).

~~~
patrickg-zill
I think he is using the MSFT technology as the worst case or "caveman"
approach and then contrasting it with situations where the person outsmarted
themselves.

------
frodwith
Classic case of overengineering the problem. I'd venture that it's a common
enough class of mistake for those who have recently learned the hip/new/cool
way of doing a thing. Thing = scaling a webservice in this example.

------
luckyland
Seems to be about Ruby and Rails by coincidence.

------
va_coder
This post spoke poorly about two concepts:

* query optimization

* robust hosting

Both of which are generic and have nothing to do with Ruby on Rails in
particular.

------
andybak
I'd been puzzling why the name was so familiar and it's because this man is
the source of the ultimate Smug Lisp Weenie quote...

Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C or
Fortran program contains an ad-hoc, informally-specified bug-ridden slow
implementation of half of Common Lisp."

------
jhawk28
ORM is rarely effective unless it is just used for CRUD. Most OOP programmers
using ORM rarely use it efficiently.

~~~
mrinterweb
ORMs are extremely useful. Many extensions/plugins/gems/whatever can be
developed for an ORM, and because ORMs are not tied to one database
technology, those extensions can reach a wide audience. Look at Rails'
ActiveRecord: there are many extensions based on that ORM that tie into Rails,
allowing developers to avoid reinventing the wheel over and over. When using
ActiveRecord I have seen how a simple oversight can lead to inefficient
database use, but it makes that inefficiency pretty obvious, and it can
usually be easily corrected. ORMs can also take care of query caching and
other optimizations for you. If you pay attention to what your ORM is doing
for you, an ORM can be efficient and a giant time saver. BTW, what do OOP
programmers have to do with things anyway? Do functional or procedural
programmers use ORMs more efficiently?

------
jdavid
I just built a service in PHP that can process 15,000 rows of data, with a
complexity of about O(1000*(10|100)n), and it runs in 4.5 minutes. I was
looking to beat 15 minutes. I expect to get that down further.

Right now I use a LIMIT of 100 to minimize SQL queries and memory. I might try
going to 1000 rows, or about 10k of data, but in my experience when the DB has
to return that much data you are not really gaining that much.

What do you think? Any other optimizations I should look at?

~~~
cschneid
One optimization is to write big O notation in minimized form. It looks more
impressive to write O(n).

And to try being helpful: do aggregation in the db where possible, be sure
indexes are good, play around with the number of rows you process at a time,
use joins if appropriate to minimize queries, if your queries are expensive at
all use EXPLAIN, depending on the database, look into using a cursor (maybe?).
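The LIMIT/OFFSET batching under discussion can be sketched in plain Ruby (the
document's other examples are Ruby, so that is used here rather than PHP). The
ROWS array stands in for the table and fetch_batch for one SQL query such as
SELECT ... LIMIT 100 OFFSET n; both names are made up for illustration:

```ruby
# 15,000 "rows" processed in fixed-size batches, mirroring a paged
# SELECT. Raising the limit trades memory per batch for fewer
# round-trips to the database.
ROWS = (1..15_000).to_a

def fetch_batch(limit, offset)
  ROWS[offset, limit] || []    # one simulated SQL query
end

queries = 0
total = 0
offset = 0
limit = 100

loop do
  batch = fetch_batch(limit, offset)
  break if batch.empty?        # final empty fetch ends the loop
  queries += 1
  total += batch.size          # stand-in for per-row processing
  offset += limit
end
# 15,000 rows handled in 150 data-bearing queries (plus one empty probe).
```

With limit = 1000 the same work takes 15 queries, which is the trade-off the
parent comments are weighing.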

~~~
jdavid
Cursor?

I was using LIMIT and OFFSET, did you mean something else?

Update: I process 200k-300k rows in a few hours, and since the DB is on the
same box as the PHP app, there is no need to make this more complicated.

------
joshu
Troll bridge. Pay troll.

~~~
jimboyoungblood
Trolls are usually anonymous.

------
rue
Hm. It seems the linked article is somewhat over-engineered for general
consumption.

------
c00p3r
Choose appropriate tools for your task, not the task for your available
tools. MS fans, Delphi fans, Java fans, and now RoR fans think the second way:
they love their tools.

The alternative approach is to use a mix of technologies and tools to
complete actual tasks. Today's Linux distributions, which are actually a mix
(if not a mess) of applications and tools written in every possible scripting
language, are a good example.

In the area of web development the same approach works the same way: you can
build different [sub]services and [sub]systems with different tools that are
more appropriate for some particular task. REST/JSON key-value storage here,
classic SQL back-end there, and so on.

