Unfortunately, this is rarely the case: for most of the open source prebuilt software you use, you rely on trust, not on audit.
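As a concrete illustration, the one verification step most installs actually get is a checksum check, not an audit. This is a hedged sketch with made-up file names; real projects publish a (hopefully signed) SHA256SUMS file next to their releases:

```shell
# Hypothetical example: "verifying" a prebuilt artifact the way most
# users realistically can. File names are placeholders, not a real project.
cd "$(mktemp -d)"
echo "pretend prebuilt binary" > tool-v1.0.tar.gz   # stand-in for a downloaded release
sha256sum tool-v1.0.tar.gz > SHA256SUMS             # upstream would publish this file
sha256sum -c SHA256SUMS                             # verifies the artifact matches
```

Note that this only moves the trust to whoever published the checksum; it catches tampering in transit, but it is nowhere near an audit, which is the point being made above.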
"Trusting Trust" (Ken Thompson's Turing Award lecture, "Reflections on Trusting Trust") is one of the seminal talks in software.
Open source vs. closed source is not where I, or the security professionals I know, put most of the trust emphasis. I would enthusiastically trust something closed from Google over a rando open source project.
But back to the original point: even the most basic audit steps are the same for an open source project as for a closed one. Observe what the binary does and inspect it for standard patterns.
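The kind of first-pass, black-box inspection meant here works identically with or without source. A minimal sketch, assuming a POSIX userland and using the system shell binary as a stand-in for any prebuilt program:

```shell
# Hypothetical sketch of source-independent binary inspection.
# Uses only POSIX tools; the target binary is just an example.
BIN="$(command -v sh)"                               # any prebuilt binary will do
ls -l "$BIN"                                         # size, permissions, ownership
od -An -tx1 -N4 "$BIN"                               # magic bytes (7f 45 4c 46 = ELF)
tr -cd '[:print:]' < "$BIN" | head -c 200; echo      # crude 'strings' pass: embedded text
```

In practice you would go further with tools like `strings`, `ldd`, or a tracer, but none of those steps care whether the source code is published.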
I think having a trusted compiler is an important first step to trusting software, even if you have to analyze it in depth yourself.
But note you are now adding a lot of extra preconditions that are largely not available.
The counterargument is that reverse engineering and black-box audits are actually easier than getting the conditions right to trust a code audit. As a bonus, they work regardless of code availability.
My original claim, which I stand by, is that code auditability for security purposes is not a reason to prefer open source software. For all the reasons this thread points out, that is just as fraught as auditing closed source software. Further, a competent audit would not look much different between open and closed source projects.
Absent a competent audit, there are lots of other factors that rank higher on my (and many more knowledgeable people's) lists of what matters for security and privacy than open vs. closed source: documented and approved algorithms, the team involved, the amount of legal backing, the market incentives, etc.
That is not to say there aren't reasons to prefer a non-Google API, or to prefer open source software for other reasons. Just that security auditability is a bad one.
I never said that. Come on, dude.