One of the Flower maintainers here. The code example is primarily meant as a demonstrator to show that it's possible to fine-tune these models in a federated way on devices as small as a Raspberry Pi 5.
The bigger takeaway is that we're close to being able to train/fine-tune models with much better performance by accessing vastly more data on the edge, in a federated way.