In ultra-simple terms, you need network capacity greater than the attacker's, and a way to identify the attack requests and discard them whilst still honouring valid requests.
That is basically what CloudFlare is.
Add in things like caches to prevent even valid requests from getting to the backend (so you've now added a CDN), and many peers to your network so that an attacker cannot saturate one or two peers... and you've got the essence of CloudFlare (sans features like optimising content for speedy delivery).
Ultimately the best defence against a DDoS is to be able to soak up the attack before it hits the backend, and to have enough spare capacity to keep serving regular traffic.
You can cache and distribute everything save for the valid requests to a dynamic resource (but even those you can optimise). So the whole game from a defensive point of view is to let nothing but valid dynamic requests through to the backend.
The attacker's side of a DDoS is about acquiring network capacity greater than yours, identifying your weaker points (in an attempt to cause a domino effect: if they can take out a weaker peer, then a stronger peer has to do more work and itself becomes weaker), and then sending what appear to be valid requests without the attack rebounding on themselves (a 150 byte request that draws a 100KB response would wipe out the attackers first). Bonus points for constructing requests that can get through to the dynamic resources on the site being attacked, as those are the weakest link.
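To put a rough number on that parenthetical, using only the illustrative 150 byte / 100KB figures above, the asymmetry that would burn the attacker works out to a few hundred to one:

```go
package main

import "fmt"

func main() {
	// Illustrative figures from the example above: a 150 byte request
	// that draws a 100KB response.
	const requestBytes = 150.0
	const responseBytes = 100.0 * 1024

	// The asymmetry works against the attacker: for every unit of bandwidth
	// spent sending requests, roughly 683 units of response traffic come back.
	fmt.Printf("response/request ratio: %.0fx\n", responseBytes/requestBytes)
}
```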
This is a very good explanation, but there is another side to this issue (I'm talking about HTTP DDoS).
Basically, DDoS attacks can be (roughly) divided into two categories:
1. Attacks on your server (which usually target your server IP or some other part of your network infrastructure)
2. Attacks on your site (which use bots to flood your site with fake HTTP requests)
As explained above, network attacks can only be countered with strong and flexible infrastructure. The most common solution is a combination of several high-powered servers and load-balancing capabilities.
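To make the load-balancing part concrete, here is a minimal round-robin reverse proxy sketch in Go. The backend addresses are invented for the example; a real deployment would add health checks, connection limits and so on:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

func main() {
	// Hypothetical backend pool: the "several high-powered servers" above.
	backends := []*url.URL{
		mustParse("http://10.0.0.1:8080"),
		mustParse("http://10.0.0.2:8080"),
		mustParse("http://10.0.0.3:8080"),
	}

	var counter uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Round-robin: spread incoming requests evenly so no single
			// backend becomes the "weaker peer" that topples first.
			target := backends[atomic.AddUint64(&counter, 1)%uint64(len(backends))]
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	http.ListenAndServe(":80", proxy)
}
```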
HTTP DDoS attacks are trickier because they're best mitigated by visitor profiling, a technology that can distinguish bots from humans and block them while still providing full access to all legitimate visitors.
Developing and maintaining such technology is arguably more complicated, simply because it's software you need to create, not hardware you can buy.
Standard profiling solutions include CAPTCHAs and delay pages, but these will also repel legitimate visitors (because no one likes CAPTCHAs, or waiting an extra 5-10 seconds for a page to load).
Advanced profiling solutions use a combination of behavior and signature recognition, coupled with seamless challenges (e.g. checking for JS support).
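As a rough sketch of such a seamless JS challenge (a simplified illustration, not how CF or Incapsula actually implement it): the first response carries a small script that sets a cookie and reloads the page, so a real browser sails through while a bot that doesn't execute JS never reaches the site. The cookie name and handler below are made up for the example.

```go
package main

import (
	"fmt"
	"net/http"
)

// challengeCookie is a made-up name; real products use signed,
// expiring tokens rather than a static value like this.
const challengeCookie = "js_challenge"

const challengePage = `<script>
  document.cookie = "js_challenge=passed; path=/";
  location.reload();
</script>`

func withJSChallenge(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if c, err := r.Cookie(challengeCookie); err == nil && c.Value == "passed" {
			// The client ran the script on a previous request: treat it as a browser.
			next.ServeHTTP(w, r)
			return
		}
		// No cookie yet: answer with the challenge instead of real content.
		// A bot that doesn't execute JS loops here and never reaches the site.
		w.Header().Set("Content-Type", "text/html")
		fmt.Fprint(w, challengePage)
	})
}

func main() {
	site := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "real content")
	})
	http.ListenAndServe(":8080", withJSChallenge(site))
}
```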
CF and Incapsula (where I work) both handle network DDoS in a similar manner, but we have a somewhat different approach to HTTP DDoS.
And yes, while under DDoS (or even without it), dynamic resources can be the "weakest link".
This is why WAFs (web application firewalls) are so important.
I probably should have clarified that by caches I mean reverse proxy caches that can take up the work of serving static resources from the network edge.
The combination of adding caches and distributing those caches is what amounts to a CDN.
You add caches as an optimisation, to stop a request reaching the backend and the same work being done twice. But in effect they become defensive shields: serving a static file or an in-memory file is far less work and can be handled in far greater numbers than doing the work on the backend, and if one cache is attacked, users accessing other caches elsewhere in the world continue to get their requests served.
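A toy version of such an edge cache, sketched as a Go reverse proxy (everything in memory, no expiry or Cache-Control handling, and the origin address is invented):

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
)

// A deliberately naive in-memory cache keyed by path. A real edge cache
// honours Cache-Control, Vary, expiry and so on.
var (
	mu    sync.RWMutex
	cache = map[string][]byte{}
)

// recorder copies what the origin sends so the body can be cached.
type recorder struct {
	http.ResponseWriter
	status int
	body   []byte
}

func (r *recorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

func (r *recorder) Write(b []byte) (int, error) {
	if r.status == 0 {
		r.status = http.StatusOK
	}
	r.body = append(r.body, b...)
	return r.ResponseWriter.Write(b)
}

func main() {
	// Invented origin address; the point is that cached paths never reach it.
	origin, _ := url.Parse("http://origin.internal:8080")
	proxy := httputil.NewSingleHostReverseProxy(origin)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Hits are served straight from memory: cheap, fast, and the
		// backend never sees the request.
		if r.Method == http.MethodGet {
			mu.RLock()
			body, ok := cache[r.URL.Path]
			mu.RUnlock()
			if ok {
				w.Write(body)
				return
			}
		}

		// Miss (or a non-GET request): go to the origin once and
		// remember the response for next time.
		rec := &recorder{ResponseWriter: w}
		proxy.ServeHTTP(rec, r)
		if r.Method == http.MethodGet && rec.status == http.StatusOK {
			mu.Lock()
			cache[r.URL.Path] = rec.body
			mu.Unlock()
		}
	})

	http.ListenAndServe(":80", nil)
}
```

Even this crude version shows the shift: a hit costs a map lookup at the edge, while only misses and dynamic requests ever touch the origin.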
If you then place a cache at every point at which your site is surfaced, for example using DNS anycast to have your front-end appear to be served from every Amazon datacenter with the closest one nearly always selected... then you've stopped requests at the first opportunity and answered them from a place that can handle a far greater volume of them.
You've increased your network capacity, increased your ability to serve valid requests, and prevented all of that traffic reaching the backends.
And in doing all of this... placing caches for static resources throughout the world and using DNS anycast to return the cached item from the closest peer... well, you have created a CDN. A primitive one for sure, but it still is one.