Hacker News | youssefabdelm's favorites

The LoRA weights for 13B are on Hugging Face ( https://huggingface.co/samwit/alpaca13B-lora ); it should be possible to follow the instructions linked in the Alpaca.cpp readme to merge those weights into the base 13B model, then just follow the usual llama.cpp conversion and quantization steps
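For intuition on what "merging" means here: a LoRA fine-tune only learns a low-rank update (two small matrices B and A) per layer, and merging folds W + (alpha/r)·BA back into each base weight matrix. A minimal numeric sketch of that idea (all sizes and values illustrative, not the real Llama dimensions):

```python
import numpy as np

# Stand-in for one base-model linear layer's weight matrix.
d, r = 8, 2  # hidden size and LoRA rank (illustrative values only)
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))

# The fine-tune only learned the low-rank factors B (d x r) and A (r x d).
B = rng.normal(size=(d, r))
A = rng.normal(size=(r, d))
alpha = 16  # LoRA scaling hyperparameter

# "Merging" folds the scaled low-rank update into the base weights,
# so inference afterwards needs no extra LoRA machinery.
W_merged = W + (alpha / r) * (B @ A)

# The merged matrix has the same shape as the original, so the usual
# conversion/quantization pipeline can treat it like any base checkpoint.
assert W_merged.shape == W.shape
```

In practice the merge script applies this per layer to the real checkpoints; the shapes staying identical is why the merged model drops straight into the normal llama.cpp conversion and quantization steps.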

Note that LoRA fine-tunes aren't the same as the original Alpaca: LoRA results in some performance loss (although how much isn't clear)


Is there a DiffusionBee or MochiDiffusion equivalent app for this yet?

