It's also notable that a lot of companies that outsource new chip development still have substantial ties with the foundries building their parts and maintain significant manufacturing capability in-house. IBM, for instance, could probably run enough of its own fabs again to stop relying on others if it really wanted to.
There's also a small handful of fabs owned by governments and universities.
And while the trend lately has been towards fabless companies, companies that do their own stunts and run their own fabs are pretty valuable because of it, and it seems likely to be an area some companies will want to get back into eventually.
For instance, can anyone think of a company with billions in cash reserves, a huge bent towards owning as much of its own supply chain as possible and a need for a lot of chips?
Both lower-end and higher-end microcontrollers are moving to more advanced processes. Due to their unique manufacturing requirements (building flash memory on-chip), only a few companies can manufacture them. And this is especially true since microcontrollers don't make that much money for fab owners.
You see the same story in medium-performance ASICs, FPGAs, memories, etc.
In the end, even when you don't have to move to newer (and rarer) manufacturing processes, doing so gives you better prices, lower power, and more capabilities. So it tends to happen.
Infineon manufactures 32-bit Cortex-M0 parts on 65nm, priced at €0.25-€1.25, including for example a math co-processor for better motor control.
Renesas manufactures higher-end microcontrollers on 40nm and is working on 28nm.
Marvell makes a 55nm microcontroller with large RAM, which it targets at internet-controlled home appliances.
Samsung can't just divert capacity away for their own use if Apple basically owns it for N years.
With the most advanced fab technologies there's a much smaller number of companies that can both afford the initial capital investment (these plants are _insanely_ expensive) and muster the engineering expertise / tools to even design at these processes. Meanwhile, the $3 microcontroller I buy that runs at 12MHz fits every requirement I have while being manufactured on a cheaper but much larger process. The high-end processors and FPGAs, however, benefit much more from smaller feature sizes (and those FPGAs cost accordingly).
As far as I know, when these 4 companies make advancements, the entire industry benefits. For example, Nikon comes out with a more advanced stepper for 14nm processes, Intel is the one paying the premium for them, and later the rest of the industry can benefit from a more reliable 65/45/32nm process after the cutting edge companies have moved on. I believe this is true for the EDA tool space as well.
The last paragraph is a bit of speculation on my part. I'm not in the semiconductor field but I know a bit about it, and logically it makes sense for tools / equipment. It does not include things like internal knowledge about manufacturing reliably on a certain process however.
Could even have military applications (comm gear is fried on an intel-sensitive mission, and exfil is not for another week or two)…
One could envisage a machine that takes a silicon blank and blasts it with a steerable beam of dopant ions, similar to a small-scale Cathode Ray Tube, with the silicon in place of the phosphor. A CRT is feasible to DIY [1,2]. What about an ion implantation device? Okay, it won't do 14nm features first up, but could it make a working transistor?
Also this group, https://code.google.com/p/homecmos
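To get a feel for the CRT-style idea, here's a rough back-of-envelope sketch of the beam-pointing precision such a machine would need. All numbers here (throw distance, accelerating voltage, plate geometry) are illustrative assumptions, not specs from any real implanter:

```python
import math

# Back-of-envelope: how precisely must a CRT-style ion beam be steered
# to write features of a given size? All numbers are assumed/illustrative.

throw_distance = 0.10   # m, ion source to wafer (assumed)
feature_size = 1e-6     # m, target feature: 1 um, far from 14 nm

# Required angular pointing stability (small-angle approximation)
angle = feature_size / throw_distance   # radians
print(f"pointing stability needed: {angle * 1e6:.1f} microradians")

# Electrostatic deflection, as in a CRT: a singly charged ion accelerated
# through V_acc, passing between plates of length l_p and gap d, is
# deflected by roughly theta ~ (V_p * l_p) / (2 * V_acc * d).
V_acc = 30e3   # V, accelerating voltage (assumed)
l_p = 0.02     # m, deflection plate length (assumed)
d = 0.01       # m, plate gap (assumed)

V_p_per_rad = 2 * V_acc * d / l_p   # plate volts per radian of deflection
print(f"plate voltage per 1 um step: {V_p_per_rad * angle * 1e3:.1f} mV")
```

So hitting 1 µm features at a 10 cm throw takes ~10 µrad pointing stability, i.e. controlling the deflection plates to a few hundred millivolts out of tens of kilovolts of beam energy. Coarse by fab standards, but not obviously out of DIY reach.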
One advantage over a CRT is that there is no need to view the "screen", so equipment can go under the Si target. Maybe a very sharp probe, meaning it has a very high electric field intensity at its tip? The probe could be shifted x-y using piezo actuators. If the ion source has a very small physical extent, there would be essentially a "single" electric field line running from the ion source, through the target, to the under-target probe. (Does Si allow E field lines through like this?) The emitted ion would follow the electric field line, crashing into the Si target before reaching the sharp probe. Moving the probe would scan the electric field line, and the point of impact, across the surface of the wafer. Alternatively, keep the ion source and probe stationary and move the wafer x-y.
("Small" meaning that the spread of the field lines is less than the desired feature size.)
But I was thinking along the same lines, except that instead of moving the probe, one would move the wafer from one end to the other and use an applied magnetic field to steer ions onto the wafer as it moved.
What I'm picturing in my head right now is two coiled metal cylinders (inner and outer), with the space in between being where the wafer moves through. Now if I could only think of where the ions should be fired from in order to follow the magnetic field lines between the cylinders to the surface of the wafer (there's probably a better setup, but working out the directions of the B and E fields for something like that in my head is too much for me at the moment).
I think what I'm getting at, though, is that it might be easier to get nm-scale distances by moving the wafer through B field lines in the x direction than by moving an E field line in x and y.
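A quick scale check on steering implant ions magnetically. Assuming a singly charged boron-11 ion (a common dopant, chosen here as an illustration) accelerated through 30 kV and a 0.1 T field, the bend radius comes out at nearly a meter, which hints at why moving the wafer mechanically may be more practical than bending a heavy-ion beam:

```python
import math

# Scale check: Larmor (bend) radius of a dopant ion in a magnetic field.
# Species, voltage, and field strength are all assumed for illustration.

u = 1.66054e-27    # kg, atomic mass unit (CODATA)
q = 1.60218e-19    # C, elementary charge (CODATA)
m = 11 * u         # kg, boron-11 ion (assumed dopant species)
V_acc = 30e3       # V, accelerating voltage (assumed)
B = 0.1            # T, applied magnetic field (assumed)

v = math.sqrt(2 * q * V_acc / m)   # speed after falling through V_acc
r = m * v / (q * B)                # Larmor radius: r = mv / qB

print(f"ion speed:   {v / 1e3:.0f} km/s")
print(f"bend radius: {r * 100:.0f} cm")
```

With these numbers the ion travels at ~700 km/s and bends on a ~80 cm radius, so deflecting it by nanometers per step means controlling B (or the geometry) extremely finely, whereas piezo stages routinely position in the nm range.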
Though the numbers I've typically heard are around 3-4 Instagrams.
Now, even once there really is a monopoly problem, it's still possible for smaller chip makers to emerge with less efficient or less powerful chips, if the market is desperate for "fresh blood".
The only possibility of more competition in the space is if an upstart is able to emerge when the next generation materials/processes emerge. Maybe carbon. Otherwise, expect the number of fabs to go one direction: down.