In the data science world, there is Anaconda. For the enterprise, there are Black Duck Software, MyGet, libraries.io (and its commercial variant), and a few others.
My internal checklist:
1) Is the license OSI-approved? (IP indemnification and IP taint are risks.)
2) What's the community like? (Is it widely used? Do security incidents get tracked and handled quickly?)
3) What security assurance work has been done? (Some OSS projects have funders who have paid for testing; what kind of test suites do they have?)
4) Add security alerts for the OSS to my RSS feeds to help with monitoring.
5) Enforce a policy of syncing to upstream fairly frequently, as many OSS security bugs get fixed silently.
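Step 5 is easy to automate: measure how far your checkout has drifted from upstream and alert past a threshold. A minimal sketch — the two repos are simulated for the demo, and the remote name `origin` is an assumption:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Simulate an upstream repo and a local clone (stand-ins for the real mirrors).
git init -q upstream
cd upstream
git -c user.email=up@example.com -c user.name=up commit -q --allow-empty -m "v1"
cd ..
git clone -q "$tmp/upstream" local
cd upstream
git -c user.email=up@example.com -c user.name=up commit -q --allow-empty -m "silently fixed security bug"

# The real workflow is just these lines, run on a schedule against your clone:
cd "$tmp/local"
git fetch -q origin
behind=$(git rev-list --count HEAD..origin/HEAD)
echo "commits behind upstream: $behind"
```

In a cron job you would fail (or page someone) when `$behind` exceeds your policy's limit.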
If I don't have confidence at this point, I'll have some static analysis performed (there are lots of tools here) as a last-resort sanity check. I know lots of bugs won't be uncovered that way, but it's an indicator of development quality.
Would love to hear what others are doing as we are a small shop and use 1000+ OSS packages.
My plan is to move to Black Duck or the commercial libraries.io subscription for my automation needs.
For Kubernetes, I'm really impressed with Aqua Security: if you deploy only into containers, they do OSS license adherence and OSS security vuln alerting. It's not a cheap product, but I love how they make OSS assurance part of build and deployment. It's a nice model that lets a central security team use technology to enforce policies, which is good for "research" environments where devs/researchers don't want to put any effort into OSS packages.
Do they have a security disclosure policy? A dedicated security mailing list?
Do they pay bounties or participate in, e.g., Pwn2Own?
Do they cryptographically sign releases?
Do they cryptographically sign VCS tags (~releases)? Commits? (`git tag -s` / `git commit -S` / `git merge -S`)
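A cheap first check, before a full `git tag -v`, is whether the tag object even contains a PGP signature block. This sketch builds a throwaway repo with a deliberately unsigned tag just to show the mechanics:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.email=a@example.com -c user.name=a commit -q --allow-empty -m "release"
# Unsigned annotated tag for the demo; a real release would use `git tag -s`.
git -c user.email=a@example.com -c user.name=a tag -a v1.0 -m "v1.0"

# `git tag -s` embeds a PGP block directly in the tag object, so grep finds it.
if git cat-file tag v1.0 | grep -q "BEGIN PGP SIGNATURE"; then
  result=signed
else
  result=unsigned
fi
echo "v1.0 is $result"
```

Note this only detects *presence* of a signature; verifying it against a trusted key still requires `git tag -v` with the maintainer's key in your keyring.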
Downstream packagers sometimes/often apply additional patches and then sign their release with the repo's (and thus system-global) GPG key.
Whether they require "Signed-off-by" may indicate that the project has mature controls and possibly a formal code review process requirement.
(Look for "Signed-off-by:" in the release branch; see `git commit -s/--signoff` and `git merge --signoff`.)
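Counting sign-offs over recent history is scriptable. In this sketch the two commits are fabricated for the demo, and only one carries a DCO sign-off:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git -c user.email=d@example.com -c user.name=dev commit -q --allow-empty --signoff -m "add feature"
git -c user.email=d@example.com -c user.name=dev commit -q --allow-empty -m "quick fix"

# Ratio of commits carrying a "Signed-off-by:" trailer in their message body.
total=$(git rev-list --count HEAD)
signed=$(git log --format=%B | grep -c "^Signed-off-by:" || true)
echo "$signed of $total commits carry Signed-off-by"
```

On a real project you'd scope this to the release branch and a recent window, e.g. `git log --since=1.year`.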
How have they integrated security review into their [iterative] release workflow?
Is the software formally verified? Are parts of the software implementation or spec formally verified?
Does the system trust the channel? The host? Is it a 'trustless' system?
What are the single points of failure?
How is logging configured? To syslog?
Do they run the app as root in a Docker container? Does it require privileged containers?
If it has to run as root, does it drop privileges at startup?
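One quick, scriptable proxy for the root-in-container question is whether the image's Dockerfile ever switches to a non-root `USER`. The Dockerfile below is invented for the demo; the grep check is approximate (in a real build, the *last* `USER` directive wins):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN adduser -D app
USER app
ENTRYPOINT ["/bin/sh"]
EOF

# Flag images that never drop out of root: no USER at all, or USER root/0.
if grep -qE '^USER[[:space:]]+' Dockerfile && \
   ! grep -qE '^USER[[:space:]]+(root|0)([[:space:]]|$)' Dockerfile; then
  verdict="non-root"
else
  verdict="root"
fi
echo "image runs as $verdict"
```

Linters like hadolint encode this same rule; the point is that it's cheap enough to enforce in CI.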
Does the package have an SELinux or AppArmor policy? (Or does the documentation just say, e.g., "set SELinux to permissive mode"?)
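Whether a package ships any MAC policy at all can be checked by looking for policy files in the unpacked tree. The `pkg/` layout here is simulated; real packages vary:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p pkg/etc/apparmor.d
: > pkg/etc/apparmor.d/usr.bin.example   # simulated shipped AppArmor profile

# AppArmor profiles live under apparmor.d/; SELinux modules ship as .te/.pp files.
found=$(find pkg \( -path '*apparmor.d/*' -o -name '*.te' -o -name '*.pp' \) -type f | head -n 1)
if [ -n "$found" ]; then
  echo "MAC policy shipped: $found"
else
  echo "no MAC policy found"
fi
```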
Is there someone you can pay to support the software in an enterprise environment? Open or closed, such contracts basically never accept liability; but if there is an SLA, do you get a pro-rated bill?
As for indicators of actual software quality:
How much test coverage is there? Line coverage or branch coverage?
Do they run static analysis tools for all pull requests and releases? Dynamic analysis? Fuzzing?
Of course, closed- or open-source projects may do none or all of these and still be totally secure or totally insecure.