I'm holding out for a 4-way contest of quant, horse, monkey, and algae. I won't be betting on the winner, either.
That's perhaps the most annoying thing about "machine learning" in general and heuristic algorithms in particular.
These researchers hack together a few rules into an algorithm, and only then does the real work start: inventing a metaphor to explain the model, with some clever story about why they decided to generate random sample points or filter training points.
In the end, the whole field starts to look like a bullshitter's ball, where everyone tries to one-up each other with the biggest bullshit metaphor to sell an algorithm that is actually only a minor tweak on an established, age-old concept.
"We use particle swarms to generate new solutions, which are then genetically modified and subjected to a darwinian-inspired differential-evolution filter, who are then analised based on the behavior manifested by wolfpacks to search and hunt for their prey, and whose sub-optimal solutions are eliminated by following nature's resource-exhaustion megakill phenomena."
Bullshit all around, but it sells.