The LoRA weights for 13B are on Hugging Face ( https://huggingface.co/samwit/alpaca13B-lora ). It should be possible to follow the instructions linked in the Alpaca.cpp readme to merge those weights into the base 13B model, then follow the usual Llama.cpp conversion and quantization steps.
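For the merge step, something along these lines with PEFT should work (a minimal sketch, not the exact script from the linked readme; the base checkpoint name and paths here are assumptions):

```python
# Sketch: fold the LoRA adapter into the base 13B model with PEFT, then save a
# plain HF checkpoint that llama.cpp's convert script can read.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = "decapoda-research/llama-13b-hf"   # assumed base 13B checkpoint
adapter = "samwit/alpaca13B-lora"         # LoRA adapter from the link above
out_dir = "./alpaca-13b-merged"           # hypothetical output directory

# Load the base model in fp16 to keep memory usage manageable
model = LlamaForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
tokenizer = LlamaTokenizer.from_pretrained(base)

# Attach the LoRA adapter, then merge its weights into the base layers
model = PeftModel.from_pretrained(model, adapter)
model = model.merge_and_unload()

# Save the merged model and tokenizer as a regular HF checkpoint
model.save_pretrained(out_dir)
tokenizer.save_pretrained(out_dir)
```

From there the usual Llama.cpp steps (its convert script, then the quantize tool) should turn the merged checkpoint into a quantized ggml file.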
Note that LoRA fine-tunes aren't the same as the original Alpaca: LoRA results in some performance loss (although how much isn't clear).