Have you quantized it?
I'm running it on Windows using koboldcpp; maybe it's faster on Linux?
That's correct, yeah. Q4_0 should be the smallest and fastest quantized model.
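If you haven't quantized yet, something like this with llama.cpp's conversion and quantization tools should work. A minimal sketch: the script name and paths are examples only and differ between llama.cpp versions, so check the README of the version you have.

```
# Convert the original weights to ggml F16, then quantize to Q4_0.
# Paths and the converter script name are examples; adjust for your setup.
python convert.py models/7B/
./quantize models/7B/ggml-model-f16.bin models/7B/ggml-model-q4_0.bin q4_0
```

koboldcpp should then load the resulting Q4_0 file in place of the full-precision one.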
> I'm running it on Windows using koboldcpp; maybe it's faster on Linux?
Possibly. You could try using WSL to test. I think both WSL1 and WSL2 are faster than running natively on Windows (but WSL1 should be faster than WSL2).
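If you do test, one way to compare is to switch an installed distro between the two WSL versions from PowerShell (the distro name below is just an example):

```
wsl -l -v                   # list installed distros and their WSL versions
wsl --set-version Ubuntu 1  # convert the distro to WSL1
wsl --set-version Ubuntu 2  # convert it back to WSL2
```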