There is a well-trodden path of "little languages" evolving over time to add features (e.g. XML build files and templating languages accreting conditionals and control flow). To me these are "custom" DSLs, and they are generally a bad thing.
On the other hand, a DSL which leans on the language of its implementation I consider a Good Thing. That is, it is embedded in the implementation language, using it for control flow etc. but also extending it somehow (in a Lisp you'll get new special forms/structures/macros; in another language you'll get a well-written API which allows easy expression of solutions to problems in the domain).
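As a sketch of what I mean by "embedded" (everything here is invented for illustration): a tiny query DSL in Python can just be an API whose expressions read like the domain, while Python itself supplies the syntax, tooling and control flow.

```python
# Hypothetical embedded DSL: domain conditions are ordinary Python
# expressions, built by overloading comparison operators on a Field type.
class Field:
    def __init__(self, name):
        self.name = name

    def __gt__(self, value):
        # Render "field > value" as a query fragment.
        return f"{self.name} > {value!r}"

    def __eq__(self, value):
        # Render "field = value" as a query fragment.
        return f"{self.name} = {value!r}"

age, city = Field("age"), Field("city")
query = (age > 21) + " AND " + (city == "Oslo")
print(query)  # -> age > 21 AND city = 'Oslo'
```

The host language gives you editor support, a debugger and a profiler for free; the "language design" is reduced to API design.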
My reasons for disliking custom DSLs are mostly that they tend to be ill-suited for programming: no tooling such as syntax highlighting, debugger support, profiling etc. (all of which must be built in the implementing language). In addition, they tend to be poorly designed and idiosyncratic, because language design is hard. Of course, if the little language becomes sufficiently popular then this becomes a solved problem.
I also note that my criticisms apply equally well to interpreted languages implemented in C, so clearly good design and popularity can mitigate all or most of them.
But I'd say that such little languages exhibit feature creep over time. And once the two features you mention (looping and conditionals) are acquired, you've lost all the benefits you mention, since you're now Turing complete.
Regexes, makefiles, Ant build files and even sendmail config files (http://okmij.org/ftp/Computation/sendmail-as-turing-machine....) have grown in complexity from their first incarnations.
I guess my position is: "Your language is very likely to grow over time - you might as well get the basics from a well designed language with a good toolset rather than design+implement it yourself".
Note also the other way people attack this problem (lack of features in the little language) is to write "config file generators". i.e. programs in a general purpose lang which emit programs in the limited lang. That also tends to argue for the fact that such languages want to grow over time.
Most DSLs I've seen are for people for whom general-purpose programming is beyond their daily capacity. It would be inappropriate to saddle that person with a decorated Lisp, or even Lua.
But also bear in mind that simple INI style files can be represented as data structure literals in the language. So you could replace your ini-style file:
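For illustration, a hypothetical ini-style file (section and key names invented) might look like:

```ini
; settings.ini (hypothetical)
[server]
host = example.com
port = 8080

[logging]
level = INFO
```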
with (python syntax):
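A sketch of what that could look like (setting names are made up for illustration) as a Python module, say `config.py`:

```python
# config.py -- hypothetical settings expressed as plain Python literals
server = {
    "host": "example.com",
    "port": 8080,
}

logging = {
    "level": "INFO",
}
```

The application then just does `import config` and reads `config.server["port"]`.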
and then load your cfg file with 'import' from python.
This approach isn't perhaps suitable for all cases, but it also allows users to do "clever" things, e.g. replacing:
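As a hypothetical example of the kind of "clever" thing I mean (names invented for illustration): a user could replace a long run of near-identical hand-written entries with a comprehension.

```python
# Hypothetical: many near-identical worker entries written out by hand...
workers = {
    "worker0": {"port": 9000},
    "worker1": {"port": 9001},
    # ...and so on, up to worker9
}

# ...replaced by a one-line comprehension that generates all ten:
workers = {f"worker{i}": {"port": 9000 + i} for i in range(10)}
```

None of this requires the config format to grow loops of its own; the host language already has them.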
Basically, I'd urge anyone considering adding anything beyond simple assignment to use a real language, and even in the case of simple assignment, I think you can use a real language without scaring the users too much.
I once worked for a SaaS provider in the recruitment industry. I was brought on to begin working on a 2.0 product, but one day they asked me to work on their 1.0 system instead.
Inside I found horror upon horror. My personal favourite was a table called TableRow_TableRow.
It contained six fields:
TableRow_TableRow had four billion rows. On commodity hardware. Without any constraints. But with an index on every field. And they wondered why it was so slow.
I was sometimes brought in to discuss the 2.0 architecture. The tech lead felt very strongly that "Everything should be called a node, and nodes should contain nodes, it will be super-duper flexible!"
My pleas that creating a graph database inside a relational database would perform horribly and be a wellspring of nightmarish bugs fell on deaf ears. My argument that perhaps we could, you know, just model the domain was dismissed as inflexible.
Then the GFC hit, the 2.0 project was cancelled and I was sacked. What a relief.
If you argue for a structured relational model, with sane names for everything, you are dismissed as some sort of dinosaur who just doesn't get it.
In my experience, these "generic" systems are the ones that eventually end up in development hell, containing an obscene number of lines of code in proportion to the problem being solved, and are impossible for new developers to understand and modify.
Duck Programming – http://news.ycombinator.com/item?id=3442419
Most fail horribly. I count myself lucky if I come across an ORM in some language that didn't visibly have joins bolted on the side in such a way that they just barely work, usually squandering all the design value the library brought to the project. (That is, merely by using one of these libraries you've often incurred technical debt on the spot.)
SQLAlchemy, on the other hand, is good.
Let's take constraints as an example. Can you use them to display a list of user-friendly validation messages when someone fills out a web form? Will they benefit from all the tools (version control, refactoring, static verification) your language of choice provides? How reusable will they be?
If there were databases with language-friendly APIs for these things, I bet developers would be less likely to re-implement them.
If you put your constraints in the application code, you usually have some validation layer that the data passes through before sending it to the DB.
It's common to have validation live both in the DB and in the app code. The constraints in the app code catch errors up front and display them to the user (e.g. on a form), while the DB constraints serve as a last line of defense to ensure data integrity.
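A minimal sketch of that split, using Python's stdlib sqlite3 (table and column names invented for illustration): the app-level check produces a friendly message, while a CHECK constraint in the schema is the backstop.

```python
# Sketch: the same rule enforced twice -- once in app code (friendly
# message for the form), once in the DB (last line of defense).
import sqlite3

def validate_age(age):
    """App-level validation: return a user-friendly error, or None if OK."""
    if not isinstance(age, int) or age < 0:
        return "Age must be a non-negative whole number."
    return None

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (name TEXT NOT NULL, age INTEGER CHECK (age >= 0))"
)

# Normally the form layer rejects bad input before it reaches the DB...
print(validate_age(-3))

# ...but if bad data slips past, the DB constraint still refuses it.
try:
    conn.execute("INSERT INTO users VALUES (?, ?)", ("bob", -3))
except sqlite3.IntegrityError as err:
    print("DB rejected it:", err)
```

The two layers serve different audiences: the app check is for humans, the constraint is for the data.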
I was unfortunate enough to inherit a project management system running on Quickbase, and in the end it was faster for me to re-implement it in Rails than make even small changes to the workflow.
Spring handles a lot for you, but learning to use it and (especially) debug/troubleshoot it is like learning another framework on top of .NET. It even has its own language (called SpEL).