It would seem that the age of a civilization and the size of its population (i.e. its overall "human" capital) could compensate for a relative weakness in individual intelligence. So could another species' willingness to cooperate in large numbers, or a civilization's motivation to advance (e.g. a civilization facing many existential threats may advance faster, assuming it survives relatively unscathed; conversely, other intelligent beings might be more or less inclined to pursue civilizational advancement in the first place).
I wonder if the relationship between societal/technological advancement and individual intelligence is neither linear nor even logarithmic; rather, what matters is having "enough" intelligence to engage in abstract inductive/deductive reasoning (perhaps the ability to do mathematics beyond basic arithmetic), beyond which other factors like those mentioned above become the bigger contributors to civilizational advancement.
We humans have far surpassed what a single individual can achieve by breaking problems into smaller, more intellectually manageable parts, which should be particularly evident to those of us on HN working on software. It seems possible that the biological (biophysical?) limit on intelligence is far outstripped by the amount of complexity in the universe, making the civilizational utility of marginal gains in individual intelligence rather low (i.e. since heavy abstraction will always be needed, the size of the "abstraction chunks" is just one factor among many). On the other hand, we on HN are also well positioned to observe the efficiency lost when systems grow past the point where no single human understands them, so I see the counterargument to my own points as well.
It would be fascinating to live in a time when the answer to this question is knowable, but I don't have my hopes up :)
edit: it's actually a link to a facebook note
https://twitter.com/neiltyson/status/1158295335799873536 -> https://www.facebook.com/notes/neil-degrasse-tyson/tweetstor...
"To have a theory of mind is to be able to attribute purpose, intention, beliefs, desires, and other attitudes to both oneself and another person or animal. In order to test whether Sarah could understand that people had thoughts that differed from her thoughts, she was presented with short video tapes where a human actor in a cage was trying to perform a task, like trying to get some bananas that were inaccessible. After watching the video Sarah was shown two pictures, one that would allow the actor to reach his goal (a box) the other not (a key). She successfully solved the problems for the actor.
But there was some concern that she was putting herself into the position of the actors, which would be a pretty exciting cognitive feat on its own, but wouldn’t show that she attributed attitudes to the actors. So she was presented with more videos, one in which the actor was her favorite caretaker and another in which the actor was someone she didn’t really like. Sarah selected the right responses more often for the actor she liked, and the wrong responses for the actor she didn’t much care for."
Luckily, blocking all 1st-party scripts fixes it, and makes the page a lot faster to boot.