IMHO success proves Patrick right.
I don't see how it would have. The whole point of making it more professional would have been to make it less cheesy. If you fail to do that, then you've failed to make it "more professional".
> IMHO success proves Patrick right.
Patrick's success has primarily been in the written word. This is his first foray into selling video-based content, and I think he has quite a bit of work to do on his presentation. Like most nerds, he's quite awkward on camera, but it's nothing that can't be fixed with some elbow grease.
Of course people will cite his Business of Software talk as a counterexample, but that talk was highly rehearsed (something he mentioned on HN) and consequently felt polished, if still a little awkward. The video on training.kalzumeus.com doesn't feel that way. Moreover, heavily rehearsing every presentation doesn't scale well when you're creating a whole series of them to sell online.
Some things that could've been easily improved in the intro video:

- wearing a proper suit (or at least a collared shirt) instead of a red tracksuit
- properly styling his hair
- wearing contacts instead of glasses (due to the reflections on them)
- not wearing an enormous geeky headset
- removing the audio "booms" that frequently occur in the video
- not filming in a 和室 (Japanese-style room)
- dropping the super-tacky gimmick with the $100 bill
Actually, A/B tests prove him right. Or occasionally wrong! But that's the point of such testing.
He mentioned he has an A/B test running between the video and no-video versions of the page. I imagine that, if the video version wins, he could then test two different videos against each other (time and expense permitting).
You can see the A/B testing in action by visiting these two links, which explicitly include and exclude the video:
Regardless, A/B testing is not a panacea. Is it a good technique? Sure, in many situations. But just like with anything else, you have to take it with a grain of salt.
I think what he is trying to say is that Phil didn't test the higher-order interactions enough, i.e. he didn't check the effect that a pair of actions has when taken together (or the effect of triples, and so on). This is very important to note, and a good point. (Ideally, one would run a fully crossed factorial experiment, so that every combination is tested.)
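A fully crossed factorial design just means enumerating every combination of every factor level, so that interaction effects can be estimated rather than assumed away. A minimal sketch (the factor names here are made-up examples, not from Patrick's actual test):

```python
# Enumerate a fully crossed factorial design: every combination of
# factor levels becomes one experimental variant, which is what lets
# you estimate pairwise (and higher-order) interaction effects.
from itertools import product

# Hypothetical factors for illustration only.
factors = {
    "video": ["on", "off"],
    "headline": ["A", "B"],
    "price_display": ["monthly", "annual"],
}

# One dict per variant, pairing each factor name with one of its levels.
variants = [dict(zip(factors, combo)) for combo in product(*factors.values())]

for v in variants:
    print(v)

# Three two-level factors yield 2 * 2 * 2 = 8 variants.
print(len(variants))  # 8
```

The obvious cost is combinatorial growth: each new factor multiplies the number of variants, which is why people fall back on testing one change at a time and miss the interactions.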
But that amounts to saying "don't do A/B testing wrong", which is obviously true. I think a lot of problems with A/B testing are caused by people without much statistical knowledge missing some of the subtleties that come with any experimental design.
(On that note, there are many articles about why one needs to be careful using A/B testing, which do have the backing of statistics.)
: http://www.evanmiller.org/how-not-to-run-an-ab-test.html, http://www.cennydd.co.uk/2009/statistical-significance-other...
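Much of what those articles warn about comes down to the underlying significance test. A minimal sketch of the standard two-proportion z-test behind most A/B comparisons (the conversion numbers are made up for illustration):

```python
# Two-proportion z-test: is the difference between two observed
# conversion rates larger than sampling noise would explain?
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for conversions out of n visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 120/1000 conversions with video vs 150/1000 without.
z, p = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Note this assumes a fixed sample size decided in advance; peeking at the p-value as data comes in and stopping when it dips below 0.05 is exactly the mistake the first article above describes.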
Right. Deliver an outstanding product right from the beginning with your very best work. That's the strategy I use. A/B testing, while important, is often overstated.