I get that any data we send to an external service is only as secure as that service makes it, and that we can't have full insight into how well it's secured.
That's fine; we can be careful about what we ship out for processing (stats, errors, log data, etc. can generally be reasonably sanitized).
The main issue we keep hitting is that these services resolve to a constantly shifting pool of IP addresses.
It's basic server security to lock down what external IPs our servers can connect to, isn't it? I've watched server hacks in action; they almost all start by expanding a toehold of access, and the methods usually involve outgoing connections: fetching and running pre-written scripts that work through a laundry list of ways to ratchet up access, install back doors and remote shells, and so on, or grabbing password files and other sensitive data and shipping them out.
In the past, this was a no-brainer: outgoing connections were only needed in a few special cases (like contacting repos for upgrades), those cases could be whitelisted, and that was that.
But now I'm running into more and more cases where services can't even give us a list of the IPs they use; they often don't seem to know themselves when new servers (with new IPs) will be spun up and their domain will suddenly start resolving to them.
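One partial workaround on the firewall side is to re-resolve each vendor hostname on a schedule and sync the answers into the egress allowlist. Here's a rough Python sketch of the idea; the hostnames and ipset name are made-up placeholders, and it assumes a Linux box where an iptables/nftables rule already matches outgoing traffic against that ipset:

    import socket
    import subprocess

    # Hypothetical vendor hostnames -- substitute whatever you actually ship data to.
    ALLOWED_HOSTS = ["metrics.example-vendor.com", "logs.example-vendor.com"]

    # Assumed setup: a firewall rule already ACCEPTs egress to members of this
    # ipset and drops everything else. The set name is a placeholder.
    IPSET_NAME = "allowed_egress"

    def current_ips(host):
        """Return the set of IPv4 addresses the hostname resolves to right now."""
        infos = socket.getaddrinfo(host, 443, socket.AF_INET, socket.SOCK_STREAM)
        return {info[4][0] for info in infos}

    def sync_allowlist():
        # -exist makes create/add idempotent, so this is safe to run from cron.
        subprocess.run(["ipset", "-exist", "create", IPSET_NAME, "hash:ip"], check=True)
        for host in ALLOWED_HOSTS:
            for ip in current_ips(host):
                subprocess.run(["ipset", "-exist", "add", IPSET_NAME, ip], check=True)

    if __name__ == "__main__":
        sync_allowlist()

Even that is fragile, though: the vendor can cut over to a new IP between runs, and stale entries pile up unless you periodically flush and rebuild the set, which is part of why a relay starts to look attractive.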
We've muddled through for now by setting up our own relay servers... but to be resilient, those will now need failover as well.
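To make the relay idea concrete, here's roughly the shape of it: a tiny TCP forwarder that listens inside the network and re-resolves the one allowed hostname at connect time, so the relay box is the only machine that needs broad egress. This is a bare-bones sketch, not our actual setup; the hostname and ports are placeholders:

    import socket
    import threading

    LISTEN_PORT = 8443
    UPSTREAM_HOST = "ingest.example-vendor.com"  # the only destination this relay allows
    UPSTREAM_PORT = 443

    def pipe(src, dst):
        """Copy bytes one way until either side closes."""
        try:
            while True:
                data = src.recv(65536)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass  # the other direction closed first
        finally:
            src.close()
            dst.close()

    def handle(client):
        """Forward one inbound connection, resolving the vendor's DNS fresh."""
        try:
            upstream = socket.create_connection((UPSTREAM_HOST, UPSTREAM_PORT))
        except OSError:
            client.close()
            return
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

    def main():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        while True:
            client, _ = server.accept()
            handle(client)

    if __name__ == "__main__":
        main()

Of course this just concentrates the problem: the relay becomes a single point of failure (hence the failover question), and it still has to be allowed to connect wherever the vendor's DNS happens to point.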
Does everyone just allow any outgoing connections these days? Or is there some better solution?