Hacker News

Firstly, it's basic DRY principles. Rather than have code dotted around your system dealing with data integrity issues, put it in one place and make your business logic more readable and easier to maintain. When trying to understand a subroutine or class I should not have to wade through all the lines of code that try to make sure the input is 'safe.'

Secondly, something I've noticed about defensive programming is that it generally creates a culture of "meta rules" that only exist in the system architect's head. When someone else reuses one of their models, the danger is they don't know all these different rules, because they are scattered around different controllers or scripts. This has been a particularly vicious source of bugs in our legacy systems. Domain experience becomes critical.

Lastly, defensive programming by its nature is also traditionally a big red warning sign that says "there is no data integrity strategy in place for this system." It doesn't matter if that data is coming from a CRUD form, a relational database or a file... there should be some validation layer that ensures the data is 'correct' before making it 'safe' to reuse across the rest of the system.
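A minimal sketch of what such a validation layer might look like, in Python (the function and field names here are hypothetical, purely to illustrate the idea of checking once at the boundary so downstream code doesn't have to):

```python
# Hypothetical validation layer: data is checked once at the boundary,
# and the rest of the system trusts the cleaned result.
def validate_user(raw: dict) -> dict:
    """Return a cleaned record or raise ValueError; callers never re-check."""
    if not raw.get("email") or "@" not in raw["email"]:
        raise ValueError("invalid email: %r" % raw.get("email"))
    age = int(raw.get("age", 0))
    if not (0 < age < 150):
        raise ValueError("invalid age: %r" % raw.get("age"))
    return {"email": raw["email"].strip().lower(), "age": age}

# Downstream code can now use the record without defensive checks.
user = validate_user({"email": "Ada@Example.com ", "age": "36"})
print(user)  # {'email': 'ada@example.com', 'age': 36}
```

Whether the data originally came from a form, a database or a file, everything past this one function sees only 'correct' data.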

I've found that defensive programming is particularly rife in systems powered by MySQL databases. MySQL has traditionally lacked the data integrity mechanisms of Oracle or other Enterprisey systems and thus has created a whole bunch of MySQL 'experts' that actually don't know anything about data integrity. A system I'm working with this very minute has a function where the first 20 lines of code (out of 28 total) is compensating for potential orphan entries in the database. The reason? The admin tool for removing entries does not do cascading deletes. Rather than fix the root of the problem, we have this 20 line check everywhere that we need to access the data safely.
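To make the "fix the root" point concrete, here's a sketch using Python's built-in sqlite3 as a stand-in for an InnoDB-backed schema (the table names are invented): declaring ON DELETE CASCADE once in the schema removes the orphans at the source, instead of needing a 20-line orphan check at every read site.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per-connection
db.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
db.execute("""CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER NOT NULL REFERENCES parent(id) ON DELETE CASCADE)""")
db.execute("INSERT INTO parent VALUES (1)")
db.execute("INSERT INTO child VALUES (10, 1)")

# Deleting the parent takes the child row with it -- no orphans left behind.
db.execute("DELETE FROM parent WHERE id = 1")
orphans = db.execute("SELECT COUNT(*) FROM child").fetchone()[0]
print(orphans)  # 0
```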

I could go on and on, but in short: defensive programming is one of the major 'code smells.'




Reddit links here, so there are some good comments over there as well for anyone reading on:

http://www.reddit.com/r/programming/comments/8cm8w/defensive...


I draw a line between defensive programming and paranoid programming. Paranoid programming looks like defensive programming run amok, and you've described it rather well.


Maybe I have a different idea of defensive programming. But for me, defensive programming means making sure that when you call a method (or something else), the data is assumed to be good or else it will go KABOOM. This is, for example, asserting on all the arguments that should exist, not writing "if (something) return;".

For example, if I call a Draw method on an object, I assert that the object's Color property has been set. I don't create a default color for it. This makes sure that Draw works as it should: it makes no assumptions about how it is being called, it just makes sure that when it was called, the input had to be valid.
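A minimal Python sketch of that style (class and property names are just illustrative): the assert fires loudly on a bad call instead of silently substituting a default.

```python
class Shape:
    def __init__(self):
        self.color = None  # the caller must set this before drawing

    def draw(self):
        # Defensive in the "go KABOOM" sense: crash loudly on a bad call
        # rather than inventing a default color.
        assert self.color is not None, "color must be set before draw()"
        return f"drawing in {self.color}"

s = Shape()
s.color = "red"
print(s.draw())  # drawing in red
```

Calling draw() on a Shape whose color was never set raises an AssertionError immediately, at the call site where the mistake was made.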


Yeah, there are two different things you could call defensive programming: the first is asserting that code assumptions are true (and crashing loudly when they're not); the second is trying to work properly with bad data.

I find the former really useful in quickly finding bugs in my code and documenting the function input assumptions. In general, I'd rather my code fail early and visibly (e.g. crash in debugging environment, return error and log the problem in production environment) than produce garbage because the input was garbage.

I agree the latter is bad for defending against inconsistencies in the internal data model, or as a "precaution" when using an API you don't trust, but it's still vital if you're dealing with data from the outside world, where you do want to try to behave sanely on possibly damaged input (e.g. not ignore the entire RSS feed just because one item in it doesn't have pubDate set).
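A small Python sketch of that tolerant stance toward outside data (the feed items here are made up): salvage the good items and report the damaged one, rather than rejecting the whole feed.

```python
# Hypothetical feed items from the outside world; one is missing pubDate.
items = [
    {"title": "first", "pubDate": "2009-04-01"},
    {"title": "second"},  # damaged: no pubDate
    {"title": "third", "pubDate": "2009-04-03"},
]

def usable_items(raw_items):
    """Keep what we can instead of rejecting the entire feed."""
    good, skipped = [], []
    for item in raw_items:
        if "pubDate" in item:
            good.append(item)
        else:
            skipped.append(item)  # a real system would log/report this
    return good, skipped

good, skipped = usable_items(items)
print(len(good), len(skipped))  # 2 1
```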


I agree with your conclusions but for different reasons.

Defensive programming "done right" should be self documenting ("oh, I see this routine requires this parameter to be non-zero"), and throw exceptions back up to the ui/app layer so that unmet assumptions can't be ignored by the app programmer.

For many-layered code bases, there needs to be a level below which data is assumed to be valid and can go unchecked (at least in release builds) - otherwise a single check will be performed many times from a single high-level call.
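One way to sketch that layering in Python (function names are hypothetical): the public entry point validates and throws, while the helper below the "validation line" assumes its input is already good, using only a cheap assert that optimized builds can strip out.

```python
def mean(values):
    """Public entry point: validate once at the boundary, throw on misuse."""
    if not values:
        raise ValueError("mean() requires a non-empty list")
    return _mean_unchecked(values)

def _mean_unchecked(values):
    # Below the validation line: input is assumed valid, so no repeated
    # checks here.  The assert documents the assumption and disappears
    # when Python runs with -O (akin to release builds elsewhere).
    assert values
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))  # 2.0
```

A high-level call that fans out into many internal calls pays the validation cost once, at the top, instead of once per layer.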


'The admin tool for removing entries does not do cascading deletes.' Don't blame MySQL for that... InnoDB tables have that feature, and MySQL has supported InnoDB for a while: http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-co...


Which is why MySQL creates tables in the other storage engine by default.


I agree about defensive programming, but I think that systems are really good to the degree that there are meta-rules which are predominantly in people's heads. The thing is, the meta-rules should be few and obvious in the code. You should be able to look at it and say "well, it looks like X happens here, so if I add some more X I definitely shouldn't put it with Y."


Defensive programming should be treated as a process: start with all the checks in your code, then move them to a centralized place so that you have one reliable exit point.

For example, I would have had a stored procedure for validating the data.

Or I would have made the smartest decision of using Postgres and lived happily ever after.


"For example, I would have had a stored procedure for validating the data. Or I would have made the smartest decision of using Postgres and lived happily ever after."

How does using a decent DB remove the usefulness of stored procedures?



