Keeping track of colour - in a way where correctness is guaranteed by the computer - is such a useful general feature that any serious modern programming language should support it. We should have easy, simple syntax for specifying colour, and should make polychromatic functions almost as easy as monochromatic ones.
When you do that - when the red of "async" is just one colour among a whole rainbow that your program is already tracking - then the cost of making an explicit distinction between async and not is minimal, and the benefits - being able to track, control, and see instantly whether a function is async or not - are more than worth it.
(Edit: read your blog; it's clear you're leading it. Poe's Law is in effect sometimes here, sorry 'bout that!)
Are there any large C projects written in this style?
It does add a bit of boilerplate. For the indexes in particular, foo_lookup(foo, idx) is substantially safer than foo[idx.value] - but then you've got to write the lookup functions (which may just be macros, but that's still a bit of cruft). Locally, if using var.value everywhere in math is getting ugly, you can of course pull things out into temporary primitive variables - enforcing things at the boundaries between functions provides almost all of the benefit.
Any time you have side effects, you potentially have "red" code, in that it can add arbitrary restrictions on how you must use the functions.
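In Haskell the restriction is at least visible in the types. A minimal sketch (the `counter`/`double` names are hypothetical, just to illustrate the point):

```haskell
import Data.IORef

-- A "red" function: its IO type restricts where it can be called from.
counter :: IORef Int -> IO Int
counter ref = modifyIORef ref (+ 1) >> readIORef ref

-- A "blue" function: pure, callable from anywhere.
double :: Int -> Int
double n = n * 2

-- double <$> counter ref   -- fine: blue code lifted into the red world
-- double (counter ref)     -- type error: an IO Int is not an Int
```

The compiler enforces the calling restriction in one direction only: pure code can be used inside effectful code, but not the reverse.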
My question is: In a language that has a first-class effect system, does the red/blue problem disappear? Does being able to generalize over effect allow you to avoid cutting the world in half, and allow you to compose effects easier?
For instance, here is an abstraction of code which reads and writes, along with one for getting the current time:
    import Prelude hiding (readLn)
    import Data.Time (UTCTime, getCurrentTime)

    class Monad m => MonadTeletype m where
      writeLn :: String -> m ()
      readLn  :: m String

    class Monad m => MonadNow m where
      now :: m UTCTime

    echoTime :: (MonadNow m, MonadTeletype m) => m ()
    echoTime = do
      line <- readLn
      t0   <- now
      writeLn (show t0 ++ ": " ++ line)

    instance MonadTeletype IO where
      writeLn = putStrLn
      readLn  = getLine

    instance MonadNow IO where
      now = getCurrentTime
I could, for instance, build a pure interpreter of those effects into, say, a stream transformer or compile it into a different language (though we'd have to do some tricks to expose Haskell's name binding).
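A sketch of what such a pure interpreter can look like - the `World`/`Pure` names are hypothetical, and the classes are repeated so the snippet stands alone. Scripted input, accumulated output, frozen clock:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Prelude hiding (readLn)
import Control.Monad.State
import Data.Time (UTCTime (..), fromGregorian)

class Monad m => MonadTeletype m where
  writeLn :: String -> m ()
  readLn  :: m String

class Monad m => MonadNow m where
  now :: m UTCTime

-- A hypothetical pure "world": scripted stdin, collected stdout, fixed clock.
data World = World { input :: [String], output :: [String], clock :: UTCTime }

newtype Pure a = Pure (State World a)
  deriving (Functor, Applicative, Monad)

instance MonadTeletype Pure where
  writeLn s = Pure (modify (\w -> w { output = output w ++ [s] }))
  readLn = Pure $ do
    w <- get
    put w { input = tail (input w) }
    pure (head (input w))

instance MonadNow Pure where
  now = Pure (gets clock)

runPure :: Pure a -> World -> (a, World)
runPure (Pure m) = runState m
```

Running echoTime under Pure instead of IO requires no change to echoTime itself - that's the payoff of keeping the effect signature abstract.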
If you want your lock-obtained/lock-released bit, then you want indexed monads. It's easy enough to do, but the safety/complexity tradeoff in the Haskell community has landed on the other side of that... probably for more historical reasons than actual technical ones.
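A minimal sketch of the idea - the `acquire`/`release`/`LockM` names are hypothetical, and real indexed-monad libraries spell the combinators differently. The computation's type tracks the lock state before and after it runs:

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

data LockState = Unlocked | Locked

-- An indexed monad over IO: i is the lock state before the
-- computation runs, j the state after.
newtype LockM (i :: LockState) (j :: LockState) a = LockM (IO a)

ipure :: a -> LockM i i a
ipure = LockM . pure

ibind :: LockM i j a -> (a -> LockM j k b) -> LockM i k b
ibind (LockM m) f = LockM (m >>= \a -> case f a of LockM n -> n)

acquire :: LockM 'Unlocked 'Locked ()
acquire = LockM (putStrLn "acquire")  -- stand-in for taking a real lock

release :: LockM 'Locked 'Unlocked ()
release = LockM (putStrLn "release")

-- Actions that require the lock to be held, and leave it held.
critical :: IO a -> LockM 'Locked 'Locked a
critical = LockM

-- Only sequences that start and end unlocked can be run, so a double
-- acquire or a missing release is a type error rather than a bug.
runLockM :: LockM 'Unlocked 'Unlocked a -> IO a
runLockM (LockM m) = m
```

With this, `acquire `ibind` \_ -> acquire` simply doesn't typecheck: after the first acquire the index is 'Locked, and acquire demands 'Unlocked.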
But that is what makes it an effect. Sure, you can model effects in a "hosted" language this way, but that's not what we mean by effects.
As another example, it'd be completely possible to write your own IO and interpret it in another language. You can read about this on Edward Kmett's blog, where he talks about implementing IO for Ermine.
Of course, there's something important that makes us want to differentiate "real world" side effects from internal "model effects". Ultimately, from a correctness and reasoning POV, there ought to be no difference. From a practical point of view, some models are more interesting or important than others. But as a compiler writer you're put right back into the same hot seat.
Unfortunately, his blog appears to be OOC right now, so here's a weird Chinese mirror!