This is a mischaracterization of Berkeley's position.
Berkeley did not admit even the existence of the senses, much less that they convey anything to us; nor did he grant the unstated assumption of the description above, that there is something outside our senses to be conveyed to us through them.
To Berkeley, everything that exists is either a perception in the mind or the mind itself.
Berkeley's writing is very accessible and clear, entertaining even. To anyone interested, I strongly recommend reading his "Three Dialogues Between Hylas and Philonous" where he leads you through his thinking step by step. I wish more philosophers wrote as well as he.
 - http://www.earlymoderntexts.com/assets/pdfs/berkeley1713.pdf
Historically it was thought that the cause of something (the origin) must share properties with the event caused (the product). This is just a false assertion. Today we understand "emergence" in which properties arise in aggregate products that are not present in their constituent origins.
Without that premise, every argument of the form "experience is a closed system" fails: the causal origin of experience does not "need" to be experience.
Light strikes a surface and collides with your eye; your nervous system enters a state known as "visual perception", and that state has properties not present in any prior step (i.e., it feels like something). These properties emerge out of the whole interaction of the system, and are not present within any mere piece of it.
Idealism is very much an epistemological virus, a metaphysical conspiracy theory, that will quickly run like wildfire through your whole belief system with its plausible soundbites. "From experience, only experience, surely?!" No.
Berkeley's Master Argument is the beginning of a line of "argument by amazement at how cool it all sounds" that ends up in Heidegger. It is justified by the constant refrain that experience is an epistemically closed system, a genetic fallacy whose plausibility is only ever achieved through rhetorically pleasing soundbites.
I'm no expert in the field of scientific thought, but my sympathies fall with Morris and Weinberg and the like. I agree with Weinberg that Aristotle understood physics worse than many schoolchildren do today. Which is not to say that Aristotle was stupid, but just that your knowledge is a function of the times you live in and the volume of past human thought you've had the privilege to learn from.
There are legitimate social and ethical concerns for everyone to keep in mind with the advancement of scientific knowledge. But this shouldn't be conflated with the idea that we don't understand the natural world better than we used to. We do.
Ironically, given the fact that Morris and Weinberg both prize clarity of thought, I think they are talking at cross purposes when they critique historians of science and are not properly defining their terms.
Take this quote from the OP for instance:
"While studying at Princeton, Morris soon learned that Kuhn held in particular contempt any view of science as a triumphal procession toward a more accurate description of the universe and how it works, a view called 'Whiggishness,' from British politics. The ultimate mouthpiece of Kuhn’s anti-Whiggish position in The Ashtray is an unnamed Harvard graduate student who insists that a new paradigm is not necessarily better than the old one, 'just different.'"
But Butterfield and other critics of the so-called "Whig" school of history are not contesting the proposition that we now understand the natural world better than a cave man (or a Hellenistic Greek) did. (They're also not relativists, incidentally). Instead, they're critiquing the idea that the specific path of progress is inevitable.
In other words, there is an underlying physical reality that sets constraints on what we can know and how we know it. But within those constraints there are innumerable potential branching paths. A Whiggish take on history of science might say, for instance, that it was inevitable that the first moon landing would be achieved by a nation with Enlightenment values. A non-Whig interpretation is not saying that the moon doesn't exist, or that we shouldn't care about the Apollo program. It's pointing out that the specific course taken by our timeline of history of science is not an inevitability. It could have been otherwise.
An interesting example that someone gave from elementary mathematics is the derivation of the quadratic formula.
Someone discussing this pointed out that this derivation is easy to follow, but extraordinarily mysterious in terms of how we knew to do various things at various steps, such as mysteriously multiplying both sides by 4a or mysteriously adding b² to each side at some point. How did we know to do that?
Of course there are different explanations of the underlying motivations and the history of how people discovered this proof, but it's easy to be given the proof without any of that context, and that kind of thing is in fact the rule rather than the exception in many parts of math study.
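For what it's worth, the "mysterious" steps do check out mechanically. Here is a minimal sketch in Python (the function name and sample coefficients are mine) that spells out the classical derivation in comments and verifies the resulting identities numerically:

```python
import math

# The classical derivation of the quadratic formula, with the "mysterious"
# steps spelled out, checked numerically:
#   a*x^2 + b*x + c = 0
#   multiply both sides by 4a:   4a^2 x^2 + 4ab x + 4ac = 0
#   add b^2 - 4ac to both sides: (2ax + b)^2 = b^2 - 4ac
#   take square roots and solve: x = (-b ± sqrt(b^2 - 4ac)) / 2a

def quadratic_roots(a, b, c):
    disc = b * b - 4 * a * c        # assumes real roots (disc >= 0)
    return ((-b + math.sqrt(disc)) / (2 * a),
            (-b - math.sqrt(disc)) / (2 * a))

# Sample polynomial (mine): 2x^2 - 3x - 5 = (2x - 5)(x + 1)
a, b, c = 2.0, -3.0, -5.0
for x in quadratic_roots(a, b, c):
    # the completed-square identity holds at each root...
    assert abs((2 * a * x + b) ** 2 - (b * b - 4 * a * c)) < 1e-9
    # ...and each root satisfies the original equation
    assert abs(a * x * x + b * x + c) < 1e-9
```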
Again, I think this was someone else's observation but I don't remember where I came across it.
A tangential point: this is indeed a horrible way to derive the formula, for the reasons you mentioned.
If anyone is curious, here's a better way to think about it. The graph of ax² + bx + c is just the graph of ax² translated. Keeping that in mind, let's investigate.
First, consider a very easy problem: find roots of ax² = 0. The graph intersects the x-axis at x=0, done.
Now, let's shift the whole graph down by Q, and solve the problem again. The equation for that graph is ax² - Q, and it intersects the x-axis at ±√(Q/a). Still easy.
Now, let's shift the whole graph again to the right by R. The equation for that new graph is a(x-R)² - Q.
What of the roots? Oh, we don't need to do much work here! The places where the graph intersects the x axis simply shifted to the right by R. So the roots are R±√(Q/a).
So, to recap: the roots of a(x-R)² - Q = 0 are R±√(Q/a).
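That recap is easy to check numerically. A small sketch (variable names and sample values are mine):

```python
import math

# Numerical check of the claim above: the roots of a*(x - R)^2 - Q = 0
# are R ± sqrt(Q/a).  Sample values (mine): a = 3, R = 2, Q = 12.
a, R, Q = 3.0, 2.0, 12.0

def f(x):
    return a * (x - R) ** 2 - Q

roots = (R + math.sqrt(Q / a), R - math.sqrt(Q / a))
for x in roots:
    assert abs(f(x)) < 1e-9    # each claimed root makes f vanish
```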
What if our equation is written in the form ax² + bx + c? Well, now is the time for algebra. Open up the parentheses:
a(x-R)² - Q
= a(x² - 2xR + R²) - Q
= ax² + (-2aR)x + (aR² - Q)
= ax² + bx + c
Solve the following system for Q and R:
-2aR = b
aR² - Q = c
which gives
R = -b/2a
Q = b²/4a - c
Now plug these Q and R into the formula we already have, R ± √(Q/a): since Q/a = (b² − 4ac)/4a², this gives the all-familiar result x = (−b ± √(b² − 4ac)) / 2a.
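The plugging-in step can itself be verified numerically; a quick sketch (sample coefficients are mine):

```python
import math

# Check that R ± sqrt(Q/a), with R = -b/(2a) and Q = b^2/(4a) - c,
# matches the textbook formula (-b ± sqrt(b^2 - 4ac)) / (2a).
# Sample coefficients (mine): x^2 - 5x + 6 = (x - 2)(x - 3).
a, b, c = 1.0, -5.0, 6.0

R = -b / (2 * a)
Q = b * b / (4 * a) - c

shifted = sorted((R + math.sqrt(Q / a), R - math.sqrt(Q / a)))
textbook = sorted(((-b + math.sqrt(b * b - 4 * a * c)) / (2 * a),
                   (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)))

for u, v in zip(shifted, textbook):
    assert abs(u - v) < 1e-9    # the two formulas agree root-for-root
```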
What the formula is hiding is the simple idea that the roots of a parabola are easily found if you know where the vertex is. So assume you do, and work backwards from there.
A deeper idea is solving an easier version of the problem, and then changing the problem back to the more general original question, refining the solution on each step.
And this is, in fact, how mathematics is often done.
>and that kind of thing is in fact the rule rather than the exception in many parts of math study.
There is work being done to change that. Note that in the argument above, I could have left out all the "work", leaving only the questions, and many people would still be able to fill in the steps themselves. And with the right preparation, a student would be led to ask the same questions.
If you never learned the algebraic trick, would you invent it when solving this problem? It's not immediately clear from the algebra alone that a quadratic polynomial ax^2 + bx + c can even be rewritten in the form a(x-R)^2 - Q.
Conversely, while this technique involves algebra that amounts to completing the square, it exhibits a general technique.
If you want to say "..and that's just a special case of..", the answer is homotopy continuation methods, because the idea here is transforming the simpler polynomial x^2 - C into a more complicated one and seeing what happens to the roots.
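To make that concrete, here is a toy sketch of the continuation idea in Python (all names, sample values, and step counts are mine; this is illustrative, not robust): deform x² − C into ax² + bx + c and track one root with Newton corrections along the way.

```python
# Toy homotopy continuation: deform x^2 - C into a*x^2 + b*x + c
# via H(x, t) = (1 - t)*(x^2 - C) + t*(a*x^2 + b*x + c), tracking a
# root from t = 0 (where the roots are ±sqrt(C)) to t = 1.
# Illustrative only: no safeguards against dH/dx = 0 or complex paths.

def track(x, a, b, c, C, steps=100):
    for i in range(1, steps + 1):
        t = i / steps
        for _ in range(5):                      # Newton correction at this t
            h = (1 - t) * (x * x - C) + t * (a * x * x + b * x + c)
            dh = (1 - t) * 2 * x + t * (2 * a * x + b)
            x -= h / dh
    return x

# Start at x = 4, a root of x^2 - 16; follow it into x^2 - 5x + 6,
# whose roots are 2 and 3. This path ends at the root 3.
root = track(4.0, 1.0, -5.0, 6.0, 16.0)
assert abs(root - 3.0) < 1e-6
```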
First put the equation into the form
x² = 2ax + b
Now complete the square:
(x – a)² = a² + b
x = a ± √(a² + b)
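Those two lines can be checked numerically as well; a tiny sketch (sample values are mine):

```python
import math

# Check the completed-square solution above: x = a ± sqrt(a^2 + b)
# should satisfy x^2 = 2ax + b.  Sample values (mine): a = 3, b = 7.
a, b = 3.0, 7.0
roots = (a + math.sqrt(a * a + b), a - math.sqrt(a * a + b))

for x in roots:
    assert abs(x * x - (2 * a * x + b)) < 1e-9
```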
The argument in my previous comment attempts to provide motivation for such a step, starting from simpler questions. It uses the geometry of the problem, and builds up from solving a simpler problem first.
That approach also uses the notion of transformation and invariance (seeing what happens to the roots when we move the graph around, and noticing that the distance between the roots doesn't change when we shift horizontally).
Again, the important part here is that you could lead someone to ask the same questions and have them answer them themselves.
"Let's solve this equation. Looks complicated. Can we solve a simpler problem first? What would a simpler problem be? Now how can we make it a little more complicated, and how does it affect the answer?".
And that's how math is done.
After all is said and done, one can extract "completing the square" as a shortcut technique. But that's what it is - a shortcut through the woods. Learning a shortcut won't teach you how to walk in the forest on your own.
Seems like a straw man.
The quoted proposition is really two rolled into one:
1. Progress is inevitable.
2. Conditional on progress occurring at all, it has to take something reasonably close to a specific path.
The first view seems to be quite widely held. I think it's false; the second, though, seems true. Is the version being critiqued here the first one, the second, or both?
Would physics as a discipline exist in its current form without thinkers like Aristotle? That’s the more important question for me.
The anthropic principle states as much. I like that you used “intelligent beings” instead of humans, because I could imagine a scenario where synthetic or artificially intelligent beings would arrive at the same conclusions, upon making the same observations of their natural world.
The Janet Malcolm dispute is instructive in this regard. Malcolm's book on the MacDonald case is a classic, assigned reading for many aspiring journalists; it is about the difficulties of the relationship between journalist and subject, and the impossibility of perfect objectivity. Morris's documentaries, sometimes downright ham-fisted in their edits (he is cruel, for example, to his subjects in Fast, Cheap, and Out of Control), could be taken as the opposite extreme: objectivity is hardly even intended.
For other readers: 'idem' = 'the same'
Radical relativism rests on Cartesian dualism. This is a philosophical doctrine that, as anyone who has studied philosophy knows, is highly problematic. In the last century or so it has been rejected by a long string of major philosophers, including Husserl, Heidegger, Merleau-Ponty, Ricoeur, Whitehead, the later Wittgenstein, Strawson, Putnam, Dewey, Peirce, and James. Kuhn, alas, was apparently unaware of this.
Also, Kuhn believed that James was right when he said that the mind of the infant is ‘a bloomin’ buzzin’ confusion’. However, this claim was made before there had been any scientific investigation of infant perception. In recent decades there has been a great deal of such research, and it has been determined that infants do perceive the world, and according to universal principles.
The short version is that any theory of observations (i.e., scientific theory) necessarily defines the observational process and therefore the observations themselves, so that it is impossible to separate an explanation from its explanandum. What's relativist about this is that the very thing you are explaining with the theory is implicit in the theory, so that what you have is a series of paradigms or theories that replace one another.
This isn't to say that we don't predict things better, or even that there is no reality separate from the theory, only that it doesn't matter at some level.
All of this too is sort of distinct from what Kuhn is remembered for, which is the social nature of science, which is all too real.
Scientists are often in denial about these social factors, which is naive at best and dangerous at worst. Humans are not conduits of God; they are imperfect machines that are part of the system they are studying.
My sense is that Morris (who I respect immensely for his work) is kind of fighting a strawman argument whose depths he doesn't entirely understand, because of some personal conflict he hasn't come to terms with (ironically, given the nature of what he is arguing against).
What I am saying is that all the different scientific theories are ultimately based on a universal understanding of reality that is in turn due to the universal nature of the human organism and its engagements with the real world. That's why I included the last paragraph in my comment.