We don't convert urls to hyperlinks in video descriptions when the user has signed up recently, because hyperlinked urls have been used by spammers to trick visitors into visiting fraudulent sites.
This is clearly legitimate behaviour in context, but a brand-new user posting identical descriptions and urls on multiple videos is behaviour commonly seen in spam accounts, and we tend to err on the side of caution.
Edit: clarified circumstances under which urls are converted
> We don't allow hyperlinks in video descriptions when the user has signed up recently...
So, I should be able to add hyperlinks in my descriptions at some point down the line? But, how will I know when that is? The fact that me trying resulted in my account being disabled makes me really scared to try it again - ever.
I should clarify: we allow you to include any url you want, but we don't add anchor tags to the HTML output for new users' videos (unless they're paying customers).
We got a request the other day for an option to disable it on weekends.
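To make that concrete, here's a minimal sketch of what this kind of conditional linkification might look like; the 30-day window and every name here are my own invention, not Vimeo's actual rule:

    import re
    from datetime import datetime, timedelta, timezone

    URL_RE = re.compile(r"https?://\S+")
    NEW_ACCOUNT_WINDOW = timedelta(days=30)  # hypothetical threshold

    def render_description(text: str, signed_up: datetime, is_paying: bool) -> str:
        """Linkify urls unless the account is both new and unpaid.
        Assumes `text` is already HTML-escaped and `signed_up` is tz-aware."""
        is_new = datetime.now(timezone.utc) - signed_up < NEW_ACCOUNT_WINDOW
        if is_new and not is_paying:
            return text  # urls stay as plain, unclickable text
        return URL_RE.sub(lambda m: f'<a href="{m.group(0)}">{m.group(0)}</a>', text)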
Does Vimeo have a way for a human employee to proactively (not as a result of social media shaming) look at and unblock a customer's account in case it was the result of a false positive?
Ban-first, handle-later is not exactly a customer-friendly policy.
No, but we have a great support team that responds to all customer enquiries (though paying customers get priority).
> Ban-first, handle-later is not exactly a customer-friendly policy.
It is when the content being banned might harm customers.
I think this is the right way to approach it, because it seems (and this is entirely my unscientific view) that a lot of those scammer websites target uploading to Vimeo, Dailymotion, and so on (basically, the sites that aren't as big as YouTube) because they think that, as a smaller outfit, the company is less likely to flag and remove the video.
Those scam videos then turn up relatively high in Google searches for full movies, full episodes, etc.
I notice this especially for non-English, non-Hollywood content. Lots of French, Chinese, Japanese, Korean stuff, all with dubious links.
If the scoring system detects suspicious activity, it's the current operation that should be halted (in this context, the video should have been taken down), not the user account.
If, after a human review, the account presents an evident violation of the TOS, then, yes, ban the user.
edit: bad grammar.
> Automatically deactivating links or removing URLs would be just as effective at avoiding harm
Generally not - in the (much more common) case where the person being suspended is a spammer, the videos attached to the descriptions in question are usually _also_ spammy - that is, they contain some instruction to the viewer to visit a given URL, or "CLICK THE LINK BELOW".
Do you mean the response in general or the post here? Because let's not confuse social media management with fast support response times; we can't all kick up a stink on social media. It's also not that great: it contained very little information not in TFA.
User-submitted HTML can be scrubbed of malicious content: script tags, cross-site scripting payloads, or broken syntax that disrupts the surrounding page into which it is embedded.
But there is no simple way to let a brand-new user embed a working URL while still preventing malicious URLs; basically, you need a giant blacklist and whitelist of domains.
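As a rough sketch of both halves (tag scrubbing via the third-party `bleach` library, plus a naive domain check; the domain lists are placeholders, not anything real):

    import bleach  # pip install bleach
    from urllib.parse import urlparse

    # Keep only harmless formatting tags; everything else is stripped.
    ALLOWED_TAGS = ["b", "i", "em", "strong", "p", "br"]

    # Placeholder lists -- real ones would be huge and constantly curated.
    DOMAIN_WHITELIST = {"example.com", "www.example.com"}
    DOMAIN_BLACKLIST = {"malware.example.net"}

    def scrub_html(html: str) -> str:
        """Drop script tags, event-handler attributes, and broken markup."""
        return bleach.clean(html, tags=ALLOWED_TAGS, attributes={}, strip=True)

    def url_allowed(url: str) -> bool:
        """Naive check: doesn't handle subdomains, redirectors, or URL
        shorteners, which is why domain lists alone don't solve this."""
        host = (urlparse(url).hostname or "").lower()
        return host not in DOMAIN_BLACKLIST and host in DOMAIN_WHITELIST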
A better UI here would be not to ban the account, but simply not to let the user save content containing HTML. The message should say "Sorry, brand-new accounts are not allowed to post HTML content." Many new accounts posting HTML content are going to be legitimate.
But we do automatically add anchor tags to urls for most videos (just not new users' videos).
I agree that it's a bit of a UI issue, but communicating to a new nonpaying user that their text-only urls will one day turn into clickable urls isn't particularly simple.
Looking at the public videos and reading the article, I think an automatic system detected a new user adding urls to two separate videos and then modifying those links slightly. That's sufficiently unusual behaviour that it met some threshold for an automated account ban.
After those responses, it sounds like they have a behavioral threshold that looks something like "some number of videos that include an external link or HTML, created or updated within some window of time, for a specific account or IP address", where the limits are probably much tighter on the free accounts.
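Purely as illustration, that kind of heuristic could be a sliding-window counter; every name and number below is invented, not anything Vimeo has confirmed:

    import time
    from collections import defaultdict, deque

    # Invented limits: (max link edits, window in seconds) per account tier.
    LIMITS = {"free": (2, 3600), "paid": (20, 3600)}

    link_edits = defaultdict(deque)  # account id or ip -> edit timestamps

    def record_link_edit(key: str, tier: str = "free") -> bool:
        """Log a description edit containing a link or HTML; return True
        if the account has crossed the (hypothetical) spam threshold."""
        max_edits, window = LIMITS[tier]
        now = time.time()
        q = link_edits[key]
        q.append(now)
        while q and q[0] < now - window:  # forget edits outside the window
            q.popleft()
        return len(q) > max_edits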
It's also a prerequisite to having a more intuitive, predictable interface. You can train people to just try clicking things or looking in their settings to see if something will work, but only if they're not worried that they're about to break the application. In a game, people will experiment and learn new mechanics, but not if they're worried that doing something wrong will come with severe consequences.
Reading this, I'm suddenly struck that policies and terms are subject to the same rules.
Of course I know that the point of having vague policies around stuff like this is very explicitly to not allow bad actors to feel safe probing your system. The point is to avoid the above scenario. But it occurs to me that this is a tradeoff. In order to keep bad actors from experimenting, you are also going to keep good actors from experimenting. People will be very careful not to go off the beaten path with your service, even to the point where they'll avoid building creative things.
They'll contact support more often instead of just trying things, and they'll be less creative with how they use your platform. Back when YouTube still had annotations working, I saw people building weird overlays and choose-your-own-adventure videos. Seeing that I could get banned this quickly for even accidentally stepping out of line in one place, I would never try something like that on Vimeo.
This doesn't mean that Vimeo's policies are bad. Vimeo may not even want people to experiment with their service. But the choice to immediately ban users, rather than popping up a notification or error message -- it's not just a policy decision, it's a UX/design decision, and it changes the way that ordinary people will interact with the company.
The message has to be conveyed to these companies that automated account closures result in loss of business.
Automated suspicious-activity systems should flag the issue for human attention within the company, where presumably someone will make a considered decision and communicate effectively with the account holder about what the issue is, so it can be understood and resolved.
Auto account closures are completely unacceptable where someone is paying for that service.
Second, every system open to the public that has even a moderate number of users has automated abuse detection and mitigation techniques in place. It's flat-out irresponsible not to have such systems, ESPECIALLY if you have paying customers, as the opportunities for abuse are huge.
Scary to think that for us average Joes, our accounts can be nuked with no recourse unless we have a large Twitter following to get someone important to notice.
Wasn't Google the original author of this kind of customer service? Don't be Evil, but do go ahead and be Kafkaesque.
My assumption here is that they have a back-end system auto-blocking injected HTML that makes it to the server, and that some UI-level filter failed. I'd love to hear the follow-up and would expect this answer.
It's interesting. They definitely feel they can "get away with it".
However, they're likely building up a sizeable group of people they've negatively affected, who will return the favour in spades when the opportunity presents itself.
Without any revenue, they close up.
How many customers can they afford to lose? 5%? 10%? 90%?
If they don't support the free-tier users, they will lose them. At one point Vimeo was making a serious run for global market share... now it looks like they are in a different market.
Vimeo hasn't been going for global market share of video watching in well over a decade - it ceded that competition to YouTube very early on by policing content more actively (Viacom sued YouTube for $1B in 2007 over its lax enforcement of rules).
Vimeo chose a different direction, launching a paid account in 2008: https://vimeo.com/1977937
Users would be able to complain about: phantom credit card charges, CAPTCHA walls, required cellphone verification, random account termination, etc.
What do you think about it? I think it's really susceptible to astroturfing, but maybe there's a way to fix that.
Maybe I would have to implement a system that deletes accounts automatically when comments are identical. It's unlikely, though, because I prefer to err on the side of deregulation and would not like to delete accounts, even if I'm 90% sure that those accounts are bots. I presume innocence until proven guilty (where "guilty" means breaking the site rules).
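If I did build that check, it could be as simple as fingerprinting normalized comment text and flagging (rather than deleting) when too many accounts share one. A sketch, with an arbitrary threshold:

    import hashlib
    import re
    from collections import defaultdict

    seen = defaultdict(set)  # comment fingerprint -> accounts that posted it

    def fingerprint(comment: str) -> str:
        """Collapse whitespace and case so trivial edits don't evade the check."""
        normalized = re.sub(r"\s+", " ", comment.strip().lower())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def looks_like_bot_swarm(account: str, comment: str, threshold: int = 5) -> bool:
        """Flag, don't delete: innocence is presumed until a human looks."""
        fp = fingerprint(comment)
        seen[fp].add(account)
        return len(seen[fp]) >= threshold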
It's impossible for me to know which complaints are true and which aren't for every complaint on the site, so I wouldn't delete any of them based solely on their truth value. I wouldn't even delete those that I know are false.
It's all imagination though, I don't know if I'm ever going to create the site. I just wanted to gauge interest and maybe receive some criticism.
Well, was it? The author makes no claim that he didn't violate the policy.