I think OS image distribution is a way to reduce support burden and improve quality. It is much easier to verify in the lab that the resulting image works than to test packages and setup instructions.
One thing you could do, if you wanted to go the low-effort route but still share it on a Raspberry Pi, would be to build the rootfs with buildroot into a tar archive and then import that as a container. I don't really know if it would work, and perhaps you would need one container per service or something, but it could be an approach — roughly like the sketch below.
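Something like this is what I have in mind; it's untested, and the image and service names are made up. Buildroot can emit the rootfs as a tarball (the BR2_TARGET_ROOTFS_TAR option, landing in output/images/rootfs.tar by default), which podman can import directly:

```
# Import the buildroot-produced tarball as a container image:
podman import output/images/rootfs.tar myproject-rootfs

# Then run each service as its own container from that image, e.g.:
podman run -d --name some-service myproject-rootfs /usr/bin/some-service
```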
I understand the reasons why, and they're perfectly fine. But I still think it's a bad decision to rely solely on that approach, because I ultimately hate wasting hardware.
My low-effort route was to run their image in a podman systemd container, but the setup magic wasn't happy with that, some services failed, and I eventually gave up debugging it. Maybe I could have set up QEMU to run it in a VM, but that requires somehow passing through I2C and SPI.
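For what it's worth, the attempt looked roughly like this (vendor-rootfs and the device paths are placeholders for whatever the image actually ships). Podman's --systemd=always sets up /run, /tmp, and /sys/fs/cgroup the way systemd expects, and --device at least spares you the VM passthrough problem for the buses:

```
# Run the vendor image with systemd as PID 1, passing the
# I2C/SPI character devices into the container:
podman run -d --name vendor-os --systemd=always \
  --device /dev/i2c-1 --device /dev/spidev0.0 \
  vendor-rootfs /sbin/init
```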
Devs might consider something like FPM (https://github.com/jordansissel/fpm) to build DEB/RPM/whatever packages, which are a lot easier to install and manage.
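As a rough sketch (the package name, version, and dependency here are invented): you stage the files under a prefix with their final filesystem paths and let fpm wrap up the tree:

```
# Build a .deb from everything under ./staging:
fpm -s dir -t deb \
    -n myservice -v 1.0.0 \
    --depends python3 \
    -C ./staging .

# Swap -t deb for -t rpm to get an RPM from the same staging tree.
```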