If you have modern hardware, you can absolutely train that at home, or do it very affordably on a cloud service.
I’ve seen a number of “DIY GPT-2” tutorials that target this sweet spot. You won’t get amazing results unless you’re willing to leave a personal computer running for hours or days and you have solid data to train on locally, but fine-tuning should be well within a normal hobbyist’s patience (see the sketch below).
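For scale, here is a minimal fine-tuning sketch using the Hugging Face transformers and datasets libraries. "train.txt" is a placeholder for whatever local text you have, and the hyperparameters are illustrative, not a tuned recipe:

    # Minimal GPT-2 fine-tuning sketch; assumes `pip install transformers datasets`.
    # "train.txt" is a placeholder for your own local training text.
    from datasets import load_dataset
    from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                              GPT2TokenizerFast, Trainer, TrainingArguments)

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # the 124M-parameter variant

    dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
    tokenized = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="gpt2-finetuned",
                               num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        # mlm=False gives standard causal-LM labels (predict the next token)
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()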
Not even on the edge. That's something you could train on a 2 GB GPU.
The general guidance I've used is that training a model takes roughly 8 bytes of RAM (or VRAM) per parameter, so a 0.125B-parameter model would need about 1 GB to train.
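As a sanity check on that heuristic (the 8-bytes-per-parameter constant is the rule of thumb above, not an exact figure; real usage also depends on optimizer, precision, batch size, and activation memory):

    # Back-of-the-envelope check of the ~8-bytes-per-parameter rule of thumb.
    BYTES_PER_PARAM = 8  # heuristic from the comment above, not an exact figure

    def training_ram_gb(n_params: float) -> float:
        """Rough RAM/VRAM (GiB) needed to train a model with n_params parameters."""
        return n_params * BYTES_PER_PARAM / 1024**3

    print(f"{training_ram_gb(0.125e9):.2f} GiB")  # -> 0.93 GiB for a 0.125B model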
I’ve come across this before and think it’s brilliant. Are you aware of any comparable firmware for Nikon users (not that I really have any complaints about what Nikon has provided, but this is likely a case of not knowing what I’m missing out on)?
I'm not, and that's the reason I went with Canon. There is also CHDK for cheaper Canon cameras. Canon seems to be less litigious when it comes to hacking their firmware.
>> Remember that when the AIDS epidemic broke out, the doctors and labs didn’t help much. People took things into their own hands and tried stuff, and in the end they found things that worked.
What??? It was the FDA that blocked access, not doctors and labs. The doctors and labs were the ones trying to find treatments; there were several studies of Peptide T and AZT underway.