I was once playing around with a 3-point ranking system. Think thumbs-down, thumbs-up, and double-thumbs-up. The thumbs-up and thumbs-down would function as expected, while the double-thumbs-up would be weighted more heavily by the recommendation algorithm. Basically a `recommend me more of this, this is high quality content` action.
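A minimal sketch of how that could work, assuming the simplest possible aggregation (a weighted sum). The reaction names and weight values here are hypothetical, not from any real platform:

```python
# Hypothetical weights for a 3-point reaction system.
# The double-thumbs-up counts for more than a plain thumbs-up,
# acting as a "recommend me more of this" signal.
RATING_WEIGHTS = {
    "thumbs_down": -1.0,      # negative signal
    "thumbs_up": 1.0,         # positive signal
    "double_thumbs_up": 2.5,  # weighted heavier for recommendations
}

def content_score(reactions):
    """Aggregate a list of reaction names into a single score."""
    return sum(RATING_WEIGHTS[r] for r in reactions)

reactions = ["thumbs_up", "double_thumbs_up", "thumbs_down", "thumbs_up"]
print(content_score(reactions))  # 1.0 + 2.5 - 1.0 + 1.0 = 3.5
```

A real recommender would feed this into per-user and per-item models rather than a flat sum, but the key design choice is the same: the third action carries extra weight instead of being a separate, ignored signal.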
There is a general problem with 5- or 10-star voting systems: consider a [malicious] user who only ever gives 1- or 10-star votes, thus ending up with more voice than one who votes in the 4-6 range, which is what the majority of content deserves. Therein lies another problem too: while the scale would imply 5.5 to be average [out of 1-10 with no 0 option], most people tend to consider 7-7.5 to be average instead; there's a very natural bias in the scale.
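A toy illustration of the first problem, with made-up numbers: on a 1-10 scale, a single extremes-only voter drags an item's average much further than a moderate voter does.

```python
# Made-up vote distributions on a 1-10 scale.
def mean(votes):
    return sum(votes) / len(votes)

baseline = [5, 6, 5, 6, 5, 6]   # item sits at a 5.5 average
moderate = baseline + [4]       # moderate voter who dislikes it a bit
extreme  = baseline + [1]       # extremes-only voter who dislikes it

print(round(mean(baseline), 2))  # 5.5
print(round(mean(moderate), 2))  # 5.29
print(round(mean(extreme), 2))   # 4.86
```

One moderate dissenting vote moves the average by about 0.2; the same opinion expressed as a 1 moves it by roughly three times that, which is the "more voice" effect.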
This idea isn't actually uncommon, however, as platforms tend to work with a thumbs-up, a thumbs-down, and a `favorite` action of sorts. Some platforms respect favorites in recommendations and some don't. I have found that YouTube doesn't care all that much about my input; forget favorites, it doesn't even seem to care about my votes. TikTok, however, did this well: I downloaded it one day, and by the end of the day my feed consisted of neat programming tricks and lessons on color theory. That revealed something about my own prejudice too, as I had expected TikTok to show me the worst content, yet it was the platform that respected my choices the most. That said, these things change a lot, so it wouldn't surprise me if the same test showed the opposite result a year from now.
What you're describing is well observed in education. The scale of your evaluation has to have an odd total of numbers and a limited number of choices: 1-2-3, for example. That way, each digit sends a strong signal. With 1-2-3-4, the 3 becomes the non-controversial choice, "average" so to speak, and you don't want average in your evaluation. You need to grade things adequately here, and giving most things an average grade is a weak signal that prevents you from differentiating.
When you grade by competency (not by knowledge), you also assign a written description to each grade, which helps a lot. I think those platforms are keenly aware of the facts I just described, and are trying to boil them down to simple actions for users: actions that carry a strong signal and respect the cultural norms of evaluation. That's why Letterboxd has a 5-star rating system with half-stars, but also has a like button.