On the contrary. Every experiment is backed up by some sort of hypothesis. (Otherwise it's not an experiment. It's called "divination": You're noodling around with some equipment to see if something "interesting" turns up. And it always does. The human mind is very good at finding "interesting" patterns in random noise.)
Hypotheses don't always have to be in accord with the body of existing theory. When Planck introduced the quantum hypothesis, it was essentially just a mathematical trick that made no sense in the context of classical physics. It was a free-floating rule of thumb: Gosh, if we pretend that energy comes in these discrete "packets", we can compute the observed blackbody spectrum! And that experimental result was so solid, and the fit between the quantum hypothesis and the result so perfect, that eventually the rest of physics had to be unraveled like a sock and reknit around that one hypothesis.
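(For the curious, here is the trick in modern notation; this is a sketch, not Planck's own 1900 derivation, which was messier. Assume an oscillator at frequency \nu can only hold energy in whole packets:

    E_n = n h \nu, \qquad n = 0, 1, 2, \ldots

Run that assumption through the statistics of radiation and out comes the blackbody spectrum that matched the measurements:

    u(\nu, T) = \frac{8 \pi h \nu^{3}}{c^{3}} \cdot \frac{1}{e^{h \nu / k_B T} - 1}

Let h \to 0 and you recover the classical prediction, u = 8 \pi \nu^{2} k_B T / c^{3}, which diverges at high frequency: the famous "ultraviolet catastrophe". That's the sense in which the packets made no sense: nothing in classical physics justified them, but only they fit the data.)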
But the hypotheses that are connected to theory are much stronger. (They gain strength from all the experiments that went into confirming the theory.) If one's hypothesis (e.g.: "you can get fusion at room temperature because of mumble mumble catalysis mumble the influence of unknown impurities in the palladium mumble mumble") doesn't manage to score a big experimental confirmation (e.g.: making enough heat, consistently, to boil one's tea) or to connect itself to the body of existing theory... it's very vulnerable. Because there's a much simpler, well-established, experimentally supported theory to explain these results: These folks have a hypothesis ("cold fusion exists"), they do the experiments, they cherry-pick the results that look good, and they find reasons to discount the ones that look bad. They need not even be doing this consciously. (Believe me when I tell you: This happens all the time, in every lab I've ever seen.)
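To see how little conscious dishonesty that takes, here's a toy simulation in Python (the numbers are invented; only the selection mechanism matters). Feed it a hundred runs of pure measurement noise, discard the ones that "obviously had a bad electrode", and a discovery appears:

    import random

    random.seed(42)

    # Toy model: 100 calorimetry runs whose "excess heat" readings are
    # pure measurement noise around zero. By construction, no real effect.
    runs = [random.gauss(0.0, 1.0) for _ in range(100)]

    # Honest analysis: average every run you performed.
    honest = sum(runs) / len(runs)

    # Motivated analysis: keep only the runs that "worked", and find a
    # reason (bad electrode, contaminated cell...) to discount the rest.
    kept = [r for r in runs if r > 0]
    picked = sum(kept) / len(kept)

    print(f"honest mean excess heat:        {honest:+.2f}")  # close to zero
    print(f"cherry-picked mean excess heat: {picked:+.2f}")  # solidly positive

About half the runs survive the cut, and their average lands near +0.8 standard deviations every single time. Nobody had to lie; they only had to have a reason ready whenever a run failed.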
Finally, when you ask me to avoid "derision", it suggests that you don't understand how science really works. Derision is the pillar on which science is built. (Though we usually prefer the politer term, "skepticism".) The central principle is "I don't believe anything which hasn't been replicated by a skeptic, because people are too trusting and hopeful and are blind to their own mistakes. Frankly, your equipment is probably broken and your students are probably ignorant; I only trust myself. And, come to think of it, I don't even trust me very much -- I should convince my skeptics to replicate my results so that I can believe me."
Of course, the best scientists manage to think this way while also appearing as kindly, smiling, eternally optimistic sages. ;) You wouldn't think, to look at Einstein, that he had balls of steel, a big ego, and a vast reserve of skepticism, but he surely did.
You aren't a skeptic. You have already decided that this palladium/deuterium nuclear effect is invalid. And since you made that decision without looking at the most current data, it rests on nothing but your faith that the process is broken.