Secondly, something I've noticed about defensive programming is that it tends to create a culture of "meta rules" that exist only in the system architect's head. When someone else reuses one of their models, the danger is that they don't know all these rules, because they are scattered across different controllers and scripts. This has been a particularly vicious source of bugs in our legacy systems. Domain experience becomes critical.
Lastly, defensive programming is also traditionally a big red warning sign that says "there is no data integrity strategy in place for this system." It doesn't matter whether the data comes from a CRUD form, a relational database or a file... there should be some validation layer that ensures the data is 'correct' before making it 'safe' to reuse across the rest of the system.
I've found that defensive programming is particularly rife in systems powered by MySQL databases. MySQL has traditionally lacked the data integrity mechanisms of Oracle or other Enterprisey systems, and has thus created a whole bunch of MySQL 'experts' that actually don't know anything about data integrity. A system I'm working with this very minute has a function where the first 20 lines of code (out of 28 total) are compensating for potential orphan entries in the database. The reason? The admin tool for removing entries does not do cascading deletes. Rather than fixing the root of the problem, we have this 20-line check everywhere that we need to access the data safely.
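To illustrate fixing the root cause instead of repeating the orphan check at every call site: a minimal sketch using SQLite (not the MySQL system described above; the table names are made up) where the cascade is declared once in the schema, so deleting a parent can never strand its children.

```python
import sqlite3

# Hypothetical sketch: declare the relationship once in the schema
# instead of repeating a 20-line orphan check everywhere.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this is on
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER NOT NULL
            REFERENCES parent(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.execute("INSERT INTO child (id, parent_id) VALUES (10, 1)")

# Deleting the parent now removes its children too -- no orphans possible.
conn.execute("DELETE FROM parent WHERE id = 1")
orphans = conn.execute("SELECT COUNT(*) FROM child").fetchone()[0]
print(orphans)  # 0
```

With the constraint in place, every reader of the data can trust it, and the 20-line compensating check disappears.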
I could go on and on, but in short: defensive programming is one of the major 'code smells.'
For example, if I call a Draw method on an object, I assert if the object's Color property hasn't been set. I don't create a default color for it. This ensures that Draw works as it should: it makes no assumptions about how it is being called, it just makes sure that whenever it is called, the state is valid.
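A minimal sketch of that idea in Python (the `Shape` class and method names are invented for illustration): the draw method asserts its precondition instead of silently substituting a default color.

```python
class Shape:
    """Hypothetical example: assert a precondition rather than
    papering over it with a default value."""

    def __init__(self):
        self.color = None  # caller is responsible for setting this

    def draw(self):
        # Fail loudly if the caller forgot to set a color, instead of
        # quietly drawing in some made-up default.
        assert self.color is not None, "color must be set before draw()"
        return f"drawing in {self.color}"

s = Shape()
s.color = "red"
print(s.draw())  # drawing in red
```

Calling `Shape().draw()` without setting a color trips the assertion immediately, pointing at the real bug in the caller.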
I find the former really useful for quickly finding bugs in my code and for documenting a function's input assumptions. In general, I'd rather my code fail early and visibly (e.g. crash in a debugging environment; return an error and log the problem in production) than produce garbage because the input was garbage.
I agree the latter is bad for defending against inconsistencies in an internal data model, or as a "precaution" when using an API you don't trust, but it's still vital when you're dealing with data from the outside world and you do want to try to behave sanely on possibly damaged input (e.g. not discard an entire RSS feed because one item doesn't have pubDate set).
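The RSS example above can be sketched like this (a hand-rolled parser over plain dicts, purely for illustration, not a real feed library): a missing or malformed pubDate degrades gracefully to `None` instead of taking the whole feed down.

```python
from datetime import datetime

def parse_item(item):
    """Hypothetical sketch: tolerate a missing or damaged pubDate on
    one feed item instead of rejecting the entire feed."""
    title = item.get("title", "(untitled)")
    raw = item.get("pubDate")
    try:
        # RFC 822-style date, the common RSS format
        published = datetime.strptime(raw, "%a, %d %b %Y %H:%M:%S %z") if raw else None
    except (TypeError, ValueError):
        published = None  # damaged date: keep the item, drop the field
    return {"title": title, "published": published}

feed = [
    {"title": "Good item", "pubDate": "Mon, 01 Jan 2024 12:00:00 +0000"},
    {"title": "No date"},  # still usable
]
items = [parse_item(i) for i in feed]
print([i["published"] is not None for i in items])  # [True, False]
```

The boundary code is deliberately lenient; once an item has passed through it, the rest of the system can rely on the shape of the result.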
Defensive programming "done right" should be self-documenting ("oh, I see this routine requires this parameter to be non-zero"), and should throw exceptions back up to the UI/app layer so that unmet assumptions can't be ignored by the app programmer.
For many-layered code bases, there needs to be a level below which data is assumed to be valid and can go unchecked (at least in release builds) - otherwise a single check will be performed many times from a single high-level call.
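One way to sketch that "level below which data is assumed valid": validate once at the boundary and wrap the result in a type whose mere existence means the check already ran, so deeper layers don't repeat it. (The `ValidatedEmail` class and function names are invented for illustration.)

```python
class ValidatedEmail:
    """Hypothetical sketch: constructing this type IS the validation,
    so holding one proves the check has already happened."""

    def __init__(self, raw: str):
        if "@" not in raw:
            # Thrown back up to the caller -- can't be silently ignored.
            raise ValueError(f"not an email address: {raw!r}")
        self.value = raw

def boundary(raw: str) -> ValidatedEmail:
    # The single high-level place where the check runs.
    return ValidatedEmail(raw)

def deep_layer(email: ValidatedEmail) -> str:
    # No re-validation down here: the type guarantees it was checked above.
    return email.value.split("@")[1]

print(deep_layer(boundary("ada@example.com")))  # example.com
```

The check runs exactly once per high-level call, while invalid data still fails early and visibly at the boundary.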
For example, I would have had a stored procedure for validating the data.
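As a rough stand-in for validation living inside the database (SQLite has no stored procedures, so this sketch uses a trigger instead, with an invented `users` table): the integrity rule is enforced by the database itself, not by every caller.

```python
import sqlite3

# Hypothetical sketch: a trigger standing in for a stored procedure,
# so the validation rule lives in the database rather than in app code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, age INTEGER)")
conn.execute("""
    CREATE TRIGGER check_age BEFORE INSERT ON users
    WHEN NEW.age < 0
    BEGIN
        SELECT RAISE(ABORT, 'age must be non-negative');
    END
""")

conn.execute("INSERT INTO users (age) VALUES (30)")  # accepted
try:
    conn.execute("INSERT INTO users (age) VALUES (-1)")  # rejected by the DB
except sqlite3.IntegrityError as e:
    print(e)  # age must be non-negative
```

However the rule is expressed (stored procedure, trigger, or constraint), the point is the same: bad rows can't get in, so readers don't need defensive checks.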
Or I would have made the smartest decision of all: using Postgres and living happily ever after.
How does using a decent DB remove the usefulness of stored procedures?