I even (sort of) do this when I'm deprecating something my team is the only user of, because sometimes it's hard to tell whether something is really unused! First shut off the VPC access while leaving all the other infrastructure and data intact, wait a week or two to see if anything breaks, then get rid of everything else.
There is another variant of this: if you can show that the code you are deleting never worked, there is no need to do a scream test. That is, if anyone cared about the code you are deleting, they would have already been screaming.
I would be careful with that. Maybe they did scream, but you didn't hear it, and they worked around the issue. Or maybe they worked around it without saying anything. Or maybe you're wrong about your code not working: it may actually be working in some way you don't know of, but that is useful to someone.
To use an ecosystem analogy: once you expose your software to the world beyond your own dev environment, even internally, you'll eventually find that something has colonized it, much like everything on this planet that isn't actively and regularly scrubbed.
I've seen cases of this in my own career. For example, we were once tweaking a little embedded database that supported a half-finished feature meant for internal use, and only then did we (as in everyone on the dev team) learn that the QA & deployment support people had somehow gotten wind of it and had been scripting against exposed parts of that DB for a good year. And, it turns out, that wasn't the only part of the software we thought of as incidental phenotype (or didn't think of at all) but the other team considered stable behavior.
See also the so-called Hyrum's Law: "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviours of your system will be depended on by somebody."
Re: Hyrum's law
This is why user-facing code should have (at least) two classes of tests:
1) Is it doing what the developer intended it to do?
2) Is it doing the same thing it did in the last release version with typical user requests?
Those sound the same, but they are not.
The first class is a set of simpler "happy path" tests of intended, specific behaviors.
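A minimal sketch of what a class-1 test looks like, where `parse_query` is a hypothetical function standing in for whatever your API actually does:

```python
# Class 1 sketch: "happy path" tests assert the developer's intended behavior.
# parse_query is a made-up example function, not anyone's real API.
def parse_query(qs: str) -> dict:
    """Parse a query string like 'a=1&b=2' into {'a': '1', 'b': '2'}."""
    return dict(pair.split("=", 1) for pair in qs.split("&") if pair)

def test_happy_paths():
    # Each case encodes one intended, specific behavior.
    assert parse_query("a=1") == {"a": "1"}
    assert parse_query("a=1&b=2") == {"a": "1", "b": "2"}
    assert parse_query("") == {}

test_happy_paths()
print("happy-path tests passed")
```

Note that every assertion here checks what the developer *meant* the code to do, which says nothing about what users have come to depend on.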
The second class is like wargaming. A good way to do this is to replay captured user requests against your API and check that they return the same results from release to release. You may also uncover interesting unintended behavior, and conversations to have with users, this way.
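The replay idea can be sketched as a golden-file test: record the last release's responses once, then diff the new release's responses against them. Here `handle_request` is a hypothetical API entry point, and the "captured" requests stand in for real production logs:

```python
import json

def handle_request(request: dict) -> dict:
    # Stand-in for the real API under test; a real system would dispatch
    # the request to actual handlers.
    return {"path": request["path"], "args": sorted(request.get("args", []))}

def record(requests, path):
    """Run once against the last release to capture golden responses."""
    golden = [{"request": r, "response": handle_request(r)} for r in requests]
    with open(path, "w") as f:
        json.dump(golden, f)

def replay(path):
    """Run against the new release; return every response that changed."""
    with open(path) as f:
        golden = json.load(f)
    diffs = []
    for entry in golden:
        got = handle_request(entry["request"])
        if got != entry["response"]:
            diffs.append((entry["request"], entry["response"], got))
    return diffs

# Requests captured from (hypothetical) production traffic.
captured = [{"path": "/search", "args": ["q=x"]}, {"path": "/health"}]
record(captured, "golden.json")
print("changed responses:", replay("golden.json"))
```

Any non-empty diff is either a regression or one of those "interesting conversations to have with users" about behavior they depend on that you never promised.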