In solving these problems you're creating an ad hoc search runtime that is likely slower and buggier than the well-tested search runtimes in constraint programming implementations.
The only things that really matter are the constraints you use to reduce the search space and the search strategy you choose. For finite domains (+, -, <, >, *, / over integers), the propagation implementations provided by constraint libraries are faster than anything you can easily write yourself, and a variety of search strategies come already implemented (and can be tweaked).
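To make the prune-then-search idea concrete, here is a minimal sketch in plain Python (no CP library) for a toy finite-domain problem: X and Y range over 1..9 with the constraints X + Y == 10 and X > Y. The variable names and domains are illustrative only; a real constraint library (Gecode, OR-Tools, SWI-Prolog's clpfd, etc.) does this propagation incrementally and far faster.

```python
from itertools import product

# Toy finite-domain problem: X, Y in 1..9, constraints X + Y == 10 and X > Y.
domains = {"X": set(range(1, 10)), "Y": set(range(1, 10))}

# Propagation step: keep only values that can still participate in a solution.
domains["X"] = {x for x in domains["X"]
                if any(x + y == 10 and x > y for y in domains["Y"])}
domains["Y"] = {y for y in domains["Y"]
                if any(x + y == 10 and x > y for x in domains["X"])}

# Search runs over the pruned domains (4 x 4 pairs) instead of all 81 pairs.
solutions = [(x, y)
             for x, y in product(sorted(domains["X"]), sorted(domains["Y"]))
             if x + y == 10 and x > y]
print(solutions)  # [(6, 4), (7, 3), (8, 2), (9, 1)]
```

The point is that the pruning happens before enumeration, so the search tree is smaller from the start; a CP library repeats this propagation after every choice the search makes.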
The likely explanation for the inelegant solutions on TopCoder and in this challenge is that constraint programming is hardly mentioned in undergraduate CS curricula, so even top students are unfamiliar with it.
Please read through the discussion around the winning entry, including what the winning coder and other participants have to say: http://news.ycombinator.com/item?id=1168289
The problem is one of combinatorial optimization (as are most of these 'hard' competitions), but the tools these students reach for first are the algorithms they learned in their algorithms courses. Constraint programming and automated planning are modern techniques built exactly for combinatorial optimization.
The winner's heuristics would have been expressed as constraints, so they would have been the first thing he tried, and they would have run faster because the excluded branches are pruned from the search tree and their solutions never evaluated. (Incidentally, this is the difference between constraint programming and Prolog: in Prolog you generate, as he did, and then test, while constraints prune before testing.) His iterative deepening search is likewise a standard strategy in Prolog/constraint programming (it visits nodes in breadth-first order while using depth-first memory) and would also be one of the first things you try.
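The generate-then-test versus prune-then-test distinction can be shown on a toy problem: find x, y in 1..100 with x * y == 24 and x < y. The problem, variable names, and candidate counts below are illustrative, not taken from the contest.

```python
from itertools import product

def generate_and_test():
    """Prolog style: enumerate every pair, then test the constraints."""
    tried, sols = 0, []
    for x, y in product(range(1, 101), repeat=2):
        tried += 1
        if x * y == 24 and x < y:
            sols.append((x, y))
    return sols, tried  # examines all 10,000 candidates

def prune_and_test():
    """Constraint style: use x * y == 24 to prune before enumerating."""
    tried, sols = 0, []
    for x in (x for x in range(1, 101) if 24 % x == 0):  # x must divide 24
        y = 24 // x        # y is then forced by the constraint
        tried += 1
        if x < y:
            sols.append((x, y))
    return sols, tried  # examines only the 8 divisors of 24

sols_gt, n_gt = generate_and_test()
sols_pt, n_pt = prune_and_test()
print(sols_gt == sols_pt, n_gt, n_pt)  # True 10000 8
```

Both find the same solutions, but the pruned version never materializes the candidates the constraint rules out, which is exactly why heuristics encoded as constraints are cheap rather than an afterthought.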