How does it plan on dealing with spam/bots? In a system where nobody has a verified identity, nothing can be removed without destroying most of the system, and consensus decides what's visible, real users seem to be at quite a disadvantage.
Spam/bots are a common, tough problem even for existing imageboards that can exert central control. I don't think light/dark comes close to solving this type of issue; it assumes the majority of interactions are benevolent.
HashCash, with the difficulty chosen collectively based on the load. Since threads/boards are represented with Merkle-CRDTs, there's eventual consistency, so participants can reach some sort of consensus based on the message rate. This limits the posting rate and buys some time for moderation. Threads eventually die off, so shadowed spam in them isn't a big problem. And as for boards, people simply won't participate in forming swarms around spammy threads, so it shouldn't be a big deal there either.
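To make the HashCash part concrete, here's a minimal sketch of rate-adaptive proof-of-work in Python. The function names (`difficulty_for_rate`, `mint_stamp`) and the "+1 bit per doubling of the rate" policy are illustrative assumptions, not the project's actual code; the real system would derive the rate from the Merkle-CRDT state rather than a local counter.

```python
# Sketch: HashCash-style stamps whose difficulty scales with observed message rate.
# All names and the difficulty policy here are assumptions for illustration.
import hashlib
import os


def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()  # zero bits before the first set bit
        break
    return bits


def difficulty_for_rate(msgs_per_minute: float, base_bits: int = 18) -> int:
    """Illustrative policy: +1 required bit for every doubling above 10 msg/min."""
    extra = 0
    rate = msgs_per_minute
    while rate > 10:
        extra += 1
        rate /= 2
    return base_bits + extra


def mint_stamp(payload: bytes, bits: int) -> bytes:
    """Brute-force a nonce so sha256(payload || nonce) has >= `bits` leading zeros."""
    while True:
        nonce = os.urandom(8)
        if leading_zero_bits(hashlib.sha256(payload + nonce).digest()) >= bits:
            return nonce


def verify_stamp(payload: bytes, nonce: bytes, bits: int) -> bool:
    """Cheap check performed by every peer before accepting a post."""
    return leading_zero_bits(hashlib.sha256(payload + nonce).digest()) >= bits


# Usage: a flooded board raises the cost per post for everyone, spammers included.
post = b"hello board"
bits = difficulty_for_rate(msgs_per_minute=160)  # busy board -> harder stamps
nonce = mint_stamp(post, bits)
assert verify_stamp(post, nonce, bits)
```

The point of tying difficulty to the observed rate is that the cost per post grows exactly when flooding starts, while verification stays cheap for everyone else.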