There are many ways attackers and hackers regularly 'verify' black-box systems. For example, cause-effect graphing: this technique establishes a relationship between logical inputs, called causes, and corresponding actions, called effects. The causes and effects are represented using Boolean graphs. In fact, it is precisely the desired side effects that give attackers the ability to detect and reverse engineer the system. This is not easy, but it is far from undetectable or irreversible.
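To make that concrete, here's a minimal sketch of cause-effect graphing. Everything in it (the cause names, the effect logic) is an invented example, not any real system's rules: causes are boolean inputs, effects are boolean functions of those causes, and enumerating every combination gives you the decision table a tester or attacker would use to probe the box.

```python
from itertools import product

# Hypothetical causes (boolean inputs) for an imaginary posting system.
CAUSES = ["logged_in", "rate_limited", "content_flagged"]

def effects(logged_in, rate_limited, content_flagged):
    """Boolean graph: each effect is a logical combination of causes."""
    return {
        "post_accepted": logged_in and not rate_limited,
        "post_visible":  logged_in and not rate_limited and not content_flagged,
        "warning_shown": content_flagged,
    }

# Decision table: every combination of causes and the effects it produces.
for combo in product([False, True], repeat=len(CAUSES)):
    row = dict(zip(CAUSES, combo))
    print(row, "->", effects(**row))
```

Against a deterministic box, running that table of inputs and comparing observed effects is exactly how you recover the internal logic without opening anything.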
Black-box systems imply "here's the box, it has inputs and it has outputs; use those to understand it without opening it". Social networks are not like that. You don't have all the inputs, nor all the outputs. You have a tiny sample of both, on a live system that constantly receives input and produces output, in a feedback loop no less, with virtually no way to verify that your sample isn't biased.
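A toy simulation of that feedback loop, with every number invented: the system adapts to your own probing, so the slice of output you observe drifts away from what the box would show a neutral observer. Your sample measures you as much as it measures the system.

```python
import random

random.seed(0)

TOPICS = ["sports", "politics", "cats"]
engagement = {t: 1.0 for t in TOPICS}  # the system's model of what you like

observed = []
for _ in range(1000):
    weights = [engagement[t] for t in TOPICS]
    shown = random.choices(TOPICS, weights=weights)[0]
    observed.append(shown)
    # Feedback loop: suppose you click on cats half the time you see them,
    # and every click nudges the ranking model toward showing more cats.
    if shown == "cats" and random.random() < 0.5:
        engagement["cats"] *= 1.01

print({t: observed.count(t) / len(observed) for t in TOPICS})
# The observed mix skews toward "cats": the output you sampled was shaped
# by your own inputs, so it is not an unbiased sample of the black box.
```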
In that sense, a black box system is trivial in comparison to something as big as a social network used by hundreds of millions or billions of people.
Social networks, by their nature, also have "eventual consistency", meaning it's unclear when something will occur (and whether it will ever occur) as the effect of a cause. They also have probabilistic behavior, where things happen only sometimes, depending on which server you hit, with what state, running what version of the software; and internally the system often literally uses a PRNG (it samples its own input), so there's that as well.
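Here's a toy model of why that breaks cause-effect probing; the replica lags, versions, and sampling rate are all made up for illustration. Each request lands on a replica with its own staleness and software version, and the server itself samples its input with a PRNG, so the same cause produces different effects on different tries.

```python
import random

random.seed(42)

# Hypothetical replicas: different staleness, different code versions.
REPLICAS = [
    {"version": "v1", "lag_s": 0},    # fresh replica
    {"version": "v1", "lag_s": 30},   # 30 seconds behind
    {"version": "v2", "lag_s": 5},    # a different code path entirely
]

def handle(post_time_s, now_s):
    replica = random.choice(REPLICAS)                   # which server you hit
    visible = now_s - post_time_s >= replica["lag_s"]   # eventual consistency
    sampled = random.random() < 0.8                     # internal input sampling
    return replica["version"], visible and sampled

# The same cause (a post at t=0, observed at t=10) yields varying effects.
for _ in range(5):
    print(handle(post_time_s=0, now_s=10))
```

Run the same "experiment" five times and you get five different answers, which is exactly what makes a decision-table approach like cause-effect graphing fall apart at this scale.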
PSA: the same applies to Twitter. The code Twitter has open-sourced doesn't include the neural networks they use to rank tweets, which, together with direct engagement, are by far the largest ranking signals.
In other words, take this tweet by Elon Musk as... well... most of his tweets: PR selling a flashy, idealized story that falls apart behind the flimsy façade. Sure, algorithms mean we can be manipulated, and that applies to all mainstream social networks.