I'm not sure I buy that lazy evaluation is too costly either; in some areas it's exactly what buys the performance gains. That's a broad claim, and it doesn't hold everywhere, but my point is that it isn't as cut and dried as you make it sound.
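For what it's worth, here's a tiny illustration of the kind of win laziness can give you (a toy sketch; `firstSquareOver` and the numbers are just made up for the example):

```haskell
-- A toy sketch: under strict evaluation, "naturals" would be an
-- infinite loop; under lazy evaluation we only pay for the elements
-- the consumer actually demands.
naturals :: [Integer]
naturals = [1..]

-- Stops as soon as the first match is found; the rest of the
-- (infinite) pipeline is never evaluated.
firstSquareOver :: Integer -> Integer
firstSquareOver limit = head (filter (> limit) (map (^ 2) naturals))

main :: IO ()
main = print (firstSquareOver 1000)  -- prints 1024
```

No intermediate list is ever fully built here, which is the sort of thing that makes "laziness is too costly" an overstatement.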
I'd say the only argument he makes against Haskell that I totally agree with is the one about its inference engine. I have very mixed feelings about type inference. It's very nice because it makes everything so concise, yet I often find myself writing out function types just for clarity. Type errors aren't fun to track down either, as he shows in the slide.
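As an example of why I write the types out anyway (a minimal sketch; the function name is my own):

```haskell
-- GHC would infer the type of this definition on its own, but the
-- explicit signature documents intent and makes a type error surface
-- here rather than at some distant call site.
halfLength :: [a] -> Double
halfLength xs = fromIntegral (length xs) / 2

main :: IO ()
main = print (halfLength "abcd")  -- prints 2.0
```

Without the signature the code still compiles, but when something goes wrong the error can show up far from where the real mistake was made.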
Please note I said "nervous", not "I would never consider it ever". Another couple of years of development and I might be less nervous; Haskell has been making a lot of progress lately. Some stuff in the absolute inner core might still be C or Assembler, but I could see the bulk switching to some other language.
These days it's hard to even buy a computer with less than a gig of RAM, but the PlayStation 3 - the most powerful console of the new generation - has only 256 megs of main memory. Granted, they're 256 really fast megs, but it's still a tiny amount of space by modern standards. You don't even have a hard drive to swap to when you run out of space - if a memory allocation fails, you work around it by hand or you crash.
If you're going to write a game engine that's meant to run on consoles, you really need fine-grained control of all the memory allocation in your system. That'd be my big worry with Haskell, more than speed or garbage collection latency or whatever.
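To make the worry concrete, here's roughly the discipline I mean, sketched in Haskell on top of its FFI allocator - a toy bump allocator with a hard budget. `Arena`, `newArena`, and `arenaAlloc` are hypothetical names I made up, not anything from a real engine:

```haskell
import Data.IORef (IORef, newIORef, readIORef, writeIORef)
import Data.Word (Word8)
import Foreign.Marshal.Alloc (free, mallocBytes)
import Foreign.Ptr (Ptr, plusPtr)

-- A toy bump allocator over one fixed block: a hard memory budget,
-- with allocation failure reported to the caller instead of crashing.
data Arena = Arena { base :: Ptr Word8, cap :: Int, used :: IORef Int }

newArena :: Int -> IO Arena
newArena n = do
  p <- mallocBytes n
  off <- newIORef 0
  return (Arena p n off)

-- Nothing means "budget exhausted: work around it by hand".
arenaAlloc :: Arena -> Int -> IO (Maybe (Ptr Word8))
arenaAlloc a n = do
  off <- readIORef (used a)
  if off + n > cap a
    then return Nothing
    else do
      writeIORef (used a) (off + n)
      return (Just (base a `plusPtr` off))

main :: IO ()
main = do
  a <- newArena 64
  r1 <- arenaAlloc a 48  -- fits in the 64-byte budget
  r2 <- arenaAlloc a 32  -- would overflow it
  print (maybe False (const True) r1, r2 == Nothing)  -- prints (True,True)
  free (base a)
```

The point isn't that you'd write an engine allocator this way - it's that console code lives and dies by exactly this "fixed block, handle failure yourself" pattern, which sits awkwardly next to a language whose default is a garbage-collected heap.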