Interesting. That seems like a pretty odd choice. Without a prompt format (like almost all instruction finetunes have), the model might continue your instruction instead of answering it, since there is no delimiter marking where the instruction ends and the response begins.
However, the model is instruction-tuned on completions rather than chats: simply tell it what you want and it should work.
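For anyone trying this, here's a minimal sketch of what that looks like with `transformers` (the model name is a placeholder, not the actual checkpoint being discussed):

```python
from transformers import pipeline

# Placeholder model id; substitute the completion-style instruct model in question.
generator = pipeline("text-generation", model="your-org/your-instruct-model")

# No chat template: the instruction goes in as plain text, and the model
# is expected to continue with the answer rather than with more instruction.
prompt = "Summarize the plot of Hamlet in two sentences.\n"

output = generator(prompt, max_new_tokens=100, do_sample=False)
print(output[0]["generated_text"])
```

Ending the prompt with a newline (or another obvious break) can help a completion-tuned model switch from "reading the instruction" to "writing the answer", since there's no explicit response delimiter to rely on.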