There is an even simpler idea at the core of many modern backup programs (e.g. bup, attic, borg)! Possibly pioneered by `gzip --rsyncable`, though I'm not sure.
Each side slices files into chunks at points where a rolling hash % N == 0, giving variable-size chunks averaging N bytes.
These cut points usually re-synchronize after insertions and deletions, since they depend only on the local content inside the hash window.
This allows not just comparing 2 files for common parts, but also deduplicated storage of ANY number of files: each file becomes a list of chunk hashes, pointing to just one stored copy of each chunk!
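
Here's a minimal Python sketch of the idea, assuming a toy polynomial rolling hash with made-up constants (real tools like bup and borg use a buzhash/Rabin-style rolling checksum and enforce both minimum and maximum chunk sizes):

```python
import hashlib

WINDOW = 48        # rolling-hash window in bytes (illustrative choice)
AVG_CHUNK = 4096   # N: cut where hash % N == 0, so chunks average ~N bytes
MIN_CHUNK = 512    # avoid degenerate tiny chunks; real tools also cap max size
PRIME = 31         # multiplier for the toy polynomial hash
MASK = (1 << 64) - 1
POW_OUT = pow(PRIME, WINDOW, 1 << 64)  # weight of the byte leaving the window

def cut_points(data: bytes):
    """Yield chunk boundaries chosen only from the last WINDOW bytes of content,
    so the same data tends to produce the same cuts regardless of what precedes it."""
    h, prev = 0, 0
    for i, b in enumerate(data):
        h = (h * PRIME + b) & MASK
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * POW_OUT) & MASK  # drop byte leaving the window
        if i + 1 - prev >= MIN_CHUNK and h % AVG_CHUNK == 0:
            yield i + 1
            prev = i + 1
    if prev < len(data):
        yield len(data)

def split(data: bytes):
    """Split data into content-defined chunks."""
    chunks, prev = [], 0
    for cut in cut_points(data):
        chunks.append(data[prev:cut])
        prev = cut
    return chunks

class DedupStore:
    """Any number of files stored as lists of chunk hashes;
    each distinct chunk body is kept exactly once."""
    def __init__(self):
        self.chunks = {}  # sha256 hex -> chunk bytes
        self.files = {}   # file name -> list of chunk hashes

    def put(self, name: str, data: bytes):
        hashes = []
        for c in split(data):
            digest = hashlib.sha256(c).hexdigest()
            self.chunks.setdefault(digest, c)  # only one copy per distinct chunk
            hashes.append(digest)
        self.files[name] = hashes

    def get(self, name: str) -> bytes:
        return b"".join(self.chunks[d] for d in self.files[name])
```

With this, inserting a few bytes in the middle of a file usually changes only the chunk(s) around the edit; the downstream cut points line up again, so storing the edited file adds just the few new chunks on top of the ones already in the store.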