Ingest time correlates linearly with file size because Perkeep needs to compute a blobref (a configurable hash) for every blob (chunk, as you call it). Splitting files into blobs/chunks is necessary because a stated goal of the project is snapshots by default whenever something is modified, and doing snapshots/versioning without chunking would be very inefficient: you'd re-store the whole file on every change instead of just the chunks that differ.
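The linear cost is easy to see in a sketch: every byte has to pass through the hash at least once. Here's a minimal illustration in Python, using fixed-size chunks and SHA-256 for simplicity (Perkeep's real chunker is content-defined and its hash/size choices differ; the 64 KiB chunk size here is just a made-up example):

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # hypothetical chunk size, for illustration only

def blobrefs(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split data into chunks and compute a 'sha256-<hex>' blobref per chunk.

    Every byte is hashed exactly once, so ingest cost grows
    linearly with file size.
    """
    refs = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        refs.append("sha256-" + hashlib.sha256(chunk).hexdigest())
    return refs

refs = blobrefs(b"x" * (200 * 1024))
print(len(refs))  # 200 KiB at 64 KiB per chunk -> 4 chunks
```

The upside of addressing chunks by hash is that identical chunks dedupe for free: modifying one chunk of a large file only requires storing that one new blob, while the unchanged chunks keep their old blobrefs.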
But Perkeep's focus, as I understand it, is more on managing an unstructured collection of immutable things (e.g. a photo archive) than on being a tool to back up your mutable filesystem. So I'm not sure chunking files this aggressively was a good design decision; it really kills performance on large files, especially on spinning disks.