Reverse engineering the firmware for an embedded product where someone lost the source code.
Bonus points available for:
* "the source control is ZIP files on a network share"
* "yeah we use forced squash commits on everything to keep the Git history nice and linear"
* "it was designed by a contractor who is now uncontactable"
Too often companies pay six figures for a feature that some supplier rips directly from an open-source project on the Internet (often GPL-licensed) and then sells as their own.
That depends on your definition. Many people, myself included, take 'red team' to mean attack simulation. If you have access to the source, that implies a white-box test, which is not an attack simulation but 'ordinary' vulnerability research.
The concrete difference between the two is that vulnerability research is mostly focused on the technical security aspects. E.g., is there a buffer overflow here, yes or no? From an efficiency perspective it makes no sense to hide the source code or even credentials from the pentesters performing this research.
An attack simulation is more holistic in nature; the question becomes "can your security team detect when we exploit this buffer overflow?". The blue team and the red team do not share details, and to give the blue team a proper exercise they are often not even told that an exercise is under way. To do a proper red team exercise the scope must be very broad: both technical controls and procedural operations are in scope. If you call application/network security research a red team exercise, I think you're doing it wrong.
So a red team, in the sense of the word that I specified, does not have access to source code, and most definitely sometimes needs to reverse engineer binaries.
Because although you don't have source code (as other commenters are saying), reversing a program to get into a company would be the hardest way to go. Red teams are used to test a company's overall security, and reversing normally wouldn't make sense compared to phishing, using common exploits, and owning the network. Reversing binaries is not the job of a red team but of pentesters targeting specific systems.
Red teaming isn't limited to "get into the company" testing of networks; it's also used for testing products and infrastructure that sit outside the company. For example, you can reasonably have a red team evaluation of some authentication or payment infrastructure based on smartcards or mobile apps, and that would inevitably include reverse engineering all the artifacts available to users. In such cases it's also likely that many or most software parts of "your" product or device aren't made by you but redistributed from some other vendor, and you don't necessarily have the source available for that.