Building the set of used files or objects (which is what mark does in a mark/sweep).
Sometimes it's too expensive to mark in place, even if the mark is just a single bit, because that bit has to be written to disk; and keeping a separate set of references may be prohibitive too (or the structure holding the references is mostly/effectively immutable).
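To make that tradeoff concrete, here is a minimal Python sketch (hypothetical object model, not anything from the article) contrasting the two approaches: flipping a mark bit on the object itself versus building an external set of reachable ids on the collector's side.

```python
# Hypothetical sketch: two ways to "mark" during mark/sweep.
# Option A: flip a bit in place on each object -- cheap in RAM, expensive if
# every flip means a disk write or the store is effectively immutable.
# Option B: build an external set of reachable ids and never touch the objects.

def mark_in_place(roots, get_children):
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj.marked:                # already visited
            continue
        obj.marked = True             # mutates the object (may mean a disk write)
        stack.extend(get_children(obj))

def mark_external(root_ids, get_child_ids):
    reachable = set()                 # kept entirely on the collector's side
    stack = list(root_ids)
    while stack:
        oid = stack.pop()
        if oid in reachable:
            continue
        reachable.add(oid)
        stack.extend(get_child_ids(oid))
    return reachable                  # sweep = delete everything not in this set
```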
If it's all in memory and mutable it doesn't (normally) really matter, but when it's not, you ideally want some mechanism to move the code to where the data is, rather than streaming the data to where the compute is (which is really wasteful for large-scale data processing).
In any case, you would not be moving/scanning the files themselves; the metadata is what you want to read for the mark phase.
The article, if I understood correctly, implies that the files and the metadata of the files (Kafka queues and so on) are separate, so presumably the metadata is much, much smaller than the data itself, but still potentially large.
For example, if you had a large-scale content-addressed store (think a massive version of git's blob storage), you typically only add to something like that and keep a few mutable root references to seed a mark/sweep from.
Following the git example, the roots would be the branches, tags and reflogs, and the metadata you scan is the transitive closure of the trees reachable from those (simplifying a bit), but not the file blobs themselves.
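Here's a rough Python sketch of what that mark phase could look like over a git-like CAS; `read_object`, the root SHAs and the payload layout are all assumptions for illustration, not git's actual plumbing. The point is that the walk only ever reads commit and tree metadata, while blobs are merely marked by hash and never opened.

```python
# Hypothetical git-like CAS: read_object(sha) returns (kind, payload), where
# commit payloads carry "tree"/"parents" and tree payloads list (sha, kind)
# entries. Only metadata objects are fetched; blob contents never are.

def mark_reachable(root_shas, read_object):
    reachable = set()
    stack = list(root_shas)           # branches, tags, reflog entries
    while stack:
        sha = stack.pop()
        if sha in reachable:
            continue
        reachable.add(sha)
        kind, payload = read_object(sha)
        if kind == "commit":
            stack.append(payload["tree"])
            stack.extend(payload["parents"])
        elif kind == "tree":
            for entry_sha, entry_kind in payload["entries"]:
                if entry_kind == "blob":
                    reachable.add(entry_sha)   # mark the blob, never open it
                else:
                    stack.append(entry_sha)    # recurse into subtrees

    return reachable

def sweep(all_shas, reachable):
    # Anything not reached from the roots is a deletion candidate.
    return [sha for sha in all_shas if sha not in reachable]
```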
I use git as an example because a CAS lends itself very well to large-scale distributed systems: you can reason about it as an immutable data structure, yet still change it effectively with sane semantics.