Sure. Well, not even with continuous deployment. We deploy infrastructure changes once a week. We have two teams (SRE & infrastructure) working on our Ansible repo, and everything currently lands on the master branch once a PR is merged (we are on GitHub).
Now, on Friday I compile a list of ready-to-go commits for the next week. These changes go to staging, then to production. However, managing the release process is painful because:
* sometimes a bug fix is only required in one environment (it could just be production), but we still merge it into master.
* we can cut a weekly release tag, but then we have to merge hotfixes into it. Okay, not a big deal, but it happens.
* we also have changes that affect the deployment globally (for example, logstash filter files are used everywhere and are not versioned per environment). If someone wants to test a filter change in dev, and only in dev, for whatever legitimate reason, we still have to push that change to production. That is bad practice - I do not like pushing changes just because they happen to be in the tree.
I thought about branching and making use of GitHub tags to help identify the scope of a change (dev? stage? prod? all?) and which components it affects (right now I have to read the commit to really understand what is being changed...). But maintaining dev, stage, and prod branches is costly too; I would have to cherry-pick commits into the different branches.
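For what it's worth, here is a rough sketch of what that per-environment branching would cost in practice - the branch names (env/dev, env/stage, env/prod) and the tag are made up, but every promotion becomes a cherry-pick:

```sh
# Hypothetical per-environment branches; every fix has to be cherry-picked
# into each branch it applies to, which is exactly the maintenance cost above.
git checkout env/dev
git cherry-pick <sha-of-fix>     # land the fix in dev first

git checkout env/stage
git cherry-pick <sha-of-fix>     # promote the same commit to staging

git checkout env/prod
git cherry-pick <sha-of-fix>     # and again for production

# weekly release tag cut from whatever prod has at the time
git tag -a <weekly-release-tag> -m "weekly infra release" env/prod
git push origin env/prod --tags
```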
So here I am with a weekly release and already feeling the pain; I can't imagine myself doing CD (even as frequently as once a day) any time soon.
Configuration is perhaps more complicated than binary deployment. With binary deployment, you can end up with, say, only the version in production and the version that is about to be in production - the "old" and the "new". If you've made it from dev to staging with one binary and then discover a bug, you go through dev and staging again, just with a new "new" binary.
Configuration, especially configuration management, often needs a more staged/tagged approach (in fact, you may have moved from having n custom builds to having one build with n configurations). You turn on a feature for some people, for one cluster, for all clusters of one type (say, v6-only clusters), and so forth. The potential combinatorial explosion is huge.
For the feature-flag case, you can use a canary approach, at least.
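With an Ansible setup like the one described above, a canary can be as simple as applying the same change to a small slice of hosts first; the inventory, group, and playbook names here are only illustrative:

```sh
# Illustrative names only. The pattern: preview, apply to a canary group,
# then roll out to everything else once it looks healthy.

# preview what would change, without changing anything
ansible-playbook -i inventory/production site.yml --check --diff

# apply to a small canary group first
ansible-playbook -i inventory/production site.yml --limit canary

# if logs/metrics look fine, roll out to the rest
ansible-playbook -i inventory/production site.yml --limit 'all:!canary'
```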
It's a lot harder to canary a change on one of your two (or four, or whatever) core switches, though.
A pattern I've seen is to move from a single weekly deploy of disparate changes (say, server config management, switch port config, switch ACL config, ...) to multiple smaller deploys (potentially done by fewer people), split by the type of change.
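As a sketch (the playbook names are invented), that looks less like one big weekly run and more like a handful of narrow ones, each of which can be owned, scheduled, and rolled back on its own:

```sh
# Invented playbook names; each run covers one type of change.
ansible-playbook -i inventory/production server_config.yml   # server config management
ansible-playbook -i inventory/production switch_ports.yml    # switch port config
ansible-playbook -i inventory/production switch_acls.yml     # switch ACL config
```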
One "nice" thing about infrastructure is that most problems are fairly immediately apparent. There are also generally a lot fewer integration-style tests you need to consider. You can detect failures and roll back quickly. Unfortunately, you've usually had a huge impact when you fail. And it's also relatively hard to verify your change before you land it.