I'm a little surprised by the lack of a solution or deeper analysis. There seem to be enough hints here to figure out the problem.
To confirm the actual reason, the easiest bet would be to prompt further after the rejection, or to add to the prompt "If you can't do something, please indicate why you can't."
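Something like this rough sketch, assuming the OpenAI Python client; the model name is a placeholder and the original prompt wasn't shared, so the string below is a hypothetical stand-in:

```python
# Rough sketch: re-run the failing prompt with an instruction to explain refusals.
# The client, model name, and prompt text are all placeholders/assumptions.
from openai import OpenAI

client = OpenAI()

original_prompt = "<the prompt that triggered the refusal>"  # hypothetical stand-in

diagnostic_prompt = (
    original_prompt
    + "\n\nIf you can't do something, please indicate why you can't."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": diagnostic_prompt}],
)
print(response.choices[0].message.content)  # ideally a refusal with a stated reason
```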
I am almost certain that it's because of the wording of the prompt. I don't think it's that the prompt is "too complex." Instead, the prompt itself is self-contradictory.
It tells the model it is to summarize, but at the end asks it to write a blog post. The model gets confused and says it can't do that because it is acting as a "summarizer" and not a "blog writer."
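To make that concrete, here's a hypothetical reconstruction; the actual prompt wasn't posted, so both variants below are made up:

```python
# Made-up example of the suspected contradiction: the stated role says
# "summarize", but the final instruction asks for a blog post.
contradictory_prompt = (
    "You are a summarizer. Condense the text below into a few sentences.\n"
    "<text>\n"
    "Finally, write a blog post based on the text."  # conflicts with the stated role
)

# A consistent rewording that states one coherent task up front.
consistent_prompt = (
    "You are a writing assistant. Summarize the text below, "
    "then turn that summary into a short blog post.\n"
    "<text>"
)
```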
A: No. He was using a weird prompt which was sometimes causing problems.
Panic over.