Hacker News
Vimeo Disabled My Account for Submitting HTML (michaelscepaniak.com)
85 points by hispanic on July 23, 2019 | 61 comments



I work at Vimeo. We use automated detection to flag suspicious accounts. This was a clear false positive, and the account has been unsuspended.

We don't convert urls to hyperlinks in video descriptions when the user has signed up recently, because hyperlinked urls have been used by spammers to trick visitors into visiting fraudulent sites.

This is clearly legitimate behaviour in context, but a brand-new user posting identical descriptions and urls on multiple videos is behaviour commonly seen in spam accounts, and we tend to err on the side of caution.

Edit: clarified circumstances under which urls are converted


I can confirm that my account is now re-enabled! Thanks.

> We don't allow hyperlinks in video descriptions when the user has signed up recently...

So, I should be able to add hyperlinks in my descriptions at some point down the line? But how will I know when that is? The fact that trying it resulted in my account being disabled makes me really scared to try it again - ever.


Responding with an alt because I throttle my HN usage...

I should clarify: we allow you to include any url you want, but we don't add anchor tags to the outputted HTML for new users' videos (unless they're paying customers).


Ahhh - so I wouldn't have even tried to submit the markup in the first place. Got it. Thanks.


As a person who posts too frequently on social media, I am interested in your self-throttling solution. Is there a browser extension or something you use?


HN has throttling logic built in: go to your user profile and configure noprocrast, maxvisit, and minaway.
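For reference, maxvisit and minaway are in minutes, so a configuration like this (illustrative values) limits you to 20-minute visits spaced at least 3 hours apart:

    noprocrast: yes
    maxvisit:   20
    minaway:    180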


Neato, thanks!


It's an option in the Hacker News account settings.


In case anyone's wondering, it's explained in the FAQ (https://news.ycombinator.com/newsfaq.html).

We got a request the other day for an option to disable it on weekends.


It'd be cool if there were an option to disable it for an hour after someone replied to one of your comments - the throttling is great for when I'm just lurking, but bad for when I'm participating.


Interesting that this is a feature but block quotes on mobile aren't :)


This is a great and fast response here.

Does Vimeo have a way for a human employee to proactively (not as a result of social media shaming) review and unblock a customer account in case of a false positive?

Ban-first, handle-later is not exactly a customer-friendly policy.


> Does Vimeo have a way for a human employee to proactively (not as a result of social media shaming) review and unblock a customer account in case of a false positive?

No, but we have a great support team that responds to all customer enquiries (though paying customers get priority).

> Ban-first, handle-later is not exactly a customer-friendly policy.

It is when the content being banned might harm customers.


> It is when the content being banned might harm customers.

I think this is the right way to approach it because it seems, and this is entirely my unscientific view, that a lot of those scammer websites target Vimeo, Dailymotion, and so on for their uploads (basically, the websites that aren't as big as YouTube) because they think that, being a smaller outfit, the company is less likely to flag and remove the video.

Then those scam videos turn up relatively high in Google searches for full movies, full episodes, etc.

I notice this especially for non-English, non-Hollywood content. Lots of French, Chinese, Japanese, Korean stuff, all with dubious links.


I think it's not.

If the scoring system detects some suspicious activity, it's the current operation that should be halted (in this context, the video should have been taken down), not the user account.

If, after a human review, the account presents evident violations of the TOS, then, yes, ban the user.

edit: bad grammar.


Automatically deactivating links or removing URLs would be just as effective at avoiding harm, and a message explaining what had happened would be much more friendly than terminating accounts without warning.


The account wasn't terminated - it was (temporarily) suspended.

> Automatically deactivating links or removing URLs would be just as effective at avoiding harm

Generally not - in the (much more common) case where the person being suspended is a spammer, the videos attached to the descriptions in question are usually _also_ spammy - that is, they contain some instruction to the user to visit a given URL, or "CLICK THE LINK BELOW".


The notice screen upon login that told me my account was disabled directed me to submit a contact form for human review, which I did. However, the time estimate for replies to submissions from Basic accounts is 3 or more days.


I actually received a response from Vimeo support (later in the evening, yesterday) to the request I submitted. It echoed what Matt (muglug) wrote. I don't know if my request just happened to make its way through the Vimeo queue quickly or if it was expedited because of this discussion. Regardless, I'm pleased with the response.


> This is a great and fast response here.

Do you mean the response in general or the post here? Because let's not confuse social media management with fast support response times; we can't all kick up a stink on social media. It's also not that great; it contained very little information not in TFA.


Would they have been unsuspended even if this didn't hit HN?


Yes, it would just have taken a little longer (maybe a day).


[flagged]


"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html


You wouldn't be comfortable using their service because you might need to wait a day on support questions?


If you can diagnose the issue after submission, why not simply disallow saving it in the field to begin with, you know, like normal devs of every platform since 1995? That's 24 years of precedent.


If you don't trust HTML from new users, how about disabling HTML for new users, instead of disabling their videos? Why crush people and scare newbies off when you can make a proportionate response?


That’s…what HTML escaping is for…?


What do you mean "HTML escaping"? If we escape HTML in the conventional sense, then it becomes literal text; you see <a href="whatever">...</a> instead of a clickable link because < is escaped as &lt; and so on.

User-submitted HTML can be scrubbed of malicious content: script tags, cross-site scripting payloads, or broken syntax that disrupts the surrounding page into which it is embedded.
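To make the distinction concrete, here is a minimal Python sketch (html is stdlib; bleach is a third-party sanitizer; the whitelist is just an example):

    import html
    import bleach  # third-party sanitizer: pip install bleach

    desc = '<a href="https://example.com">hi</a><script>alert(1)</script>'

    # Escaping: everything becomes literal text, nothing is clickable.
    print(html.escape(desc))
    # -> &lt;a href=&quot;https://example.com&quot;&gt;hi&lt;/a&gt;...

    # Sanitizing: whitelisted tags survive, disallowed tags are stripped.
    print(bleach.clean(desc, tags={"a"}, attributes={"a": ["href"]}, strip=True))
    # -> <a href="https://example.com">hi</a>alert(1)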

But there is no simple way to allow a brand-new user to embed a working URL such that malicious URLs are prevented; basically you need a giant blacklist and whitelist of domains.

A better UI here would be not to ban the account but just not allow the user to save the content containing HTML. The message should say "Sorry, brand new accounts are not allowed to post HTML content". Many new accounts posting HTML content are going to be legitimate.


We don't allow any HTML tags in video descriptions (we pass them through a simple tag-stripping filter).

But we do automatically add anchor tags to urls for most videos (just not new users' videos).
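Roughly, as a sketch (a guess at the shape in Python, not our actual pipeline; trusted_user stands in for the account-age/payment checks):

    import html
    import re

    URL_RE = re.compile(r'https?://[^\s<]+')

    def render_description(text: str, trusted_user: bool) -> str:
        # Naive tag stripping, then escape so whatever is left renders as text.
        stripped = re.sub(r'<[^>]*>', '', text)
        escaped = html.escape(stripped)
        if not trusted_user:
            return escaped  # new, non-paying users: urls stay plain text
        # Established users: wrap bare urls in anchor tags.
        return URL_RE.sub(r'<a href="\g<0>">\g<0></a>', escaped)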

I agree that it's a bit of a UI issue, but communicating to a new nonpaying user that their text-only urls will one day turn into clickable urls isn't particularly simple.

Looking at the public videos, and reading the article, I think an automated system detected a new user adding urls to two separate videos and then modifying those links slightly. That's sufficiently unusual behaviour that it met some threshold for an automated account ban.


Okay, I'm not saying it's completely awful to try to detect bad behavior this way, but you keep offering up explanations for why it happened, when the real issue here is the lack of transparency into a rule that can get you banned, and when it would be trivial to offer users useful feedback instead. As the author said, this behavior has a chilling effect, or at least an extremely frustrating one. The issue isn't why the user's actions resulted in a ban; it's that the entire situation should have been avoidable.


They can't share the detailed rules, since that would give abusers more information to take advantage of. You assume everyone is a legitimate user, but a non-trivial fraction of online accounts exist for abuse. Since perfect precision in abuse detection is impossible, and insufficient abuse detection often means death for the service (spam, legal, PR), this is impossible to avoid. The only question is the rate of false-positive flagging. Given that this is purely anecdotal, it doesn't give me any way to assess whether Vimeo's false-positive rate is egregiously bad or not.


I don't see how an error message stating "html markup is not allowed in your post" will in any way facilitate more abuse than banning, when abusive bad actors will simply take the ban as a signal of the same, create another account, and then avoid the activity. There's a balance to be struck between usability and abuse prevention, and currently Vimeo has it wrong.


What if every link were encoded with something like a TinyURL, and then, whenever the link is clicked, the translation step checked whether the target is blacklisted? This also has the added benefit of allowing statistics to be gathered on common links across multiple accounts.
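Something like this, as a Flask sketch (the store names and blacklist here are hypothetical):

    from urllib.parse import urlparse
    from flask import Flask, abort, redirect

    app = Flask(__name__)

    LINKS = {"abc123": "https://example.com/some/page"}  # short code -> target
    BLACKLISTED_HOSTS = {"evil.example"}                 # hypothetical blacklist

    @app.route("/l/<code>")
    def follow(code):
        url = LINKS.get(code)
        if url is None or urlparse(url).hostname in BLACKLISTED_HOSTS:
            abort(404)  # blacklist is consulted at click time, not upload time
        # click-through stats across accounts could be recorded here
        return redirect(url)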


Approximately 100% of spam URLs haven't been blacklisted yet, because they are being generated by high-speed automated processes.


Perhaps, but the DNS resolutions to the underlying servers hosting those spam URLs, as well as the host headers, can't possibly be that varied. At the end of the day, you can have infinite URLs, but you can't have infinite servers.


I wonder to what extent it would be effective/useful to cross-reference SMTP blacklist databases. As in, resolve a URL to an IP address, and then see if that IP block is "spammy" in the e-mail sense.
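As a sketch in Python (using a Spamhaus zone as the example DNSBL; their usage terms apply, and URL-oriented lists like SURBL also exist):

    import socket

    def ip_is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
        # DNSBLs are queried with reversed IPv4 octets: 1.2.3.4 -> 4.3.2.1.zone
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)  # any A record means "listed"
            return True
        except socket.gaierror:
            return False

    def url_host_is_spammy(hostname: str) -> bool:
        try:
            return ip_is_listed(socket.gethostbyname(hostname))
        except socket.gaierror:
            return False  # unresolvable hosts need separate handling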


What do you mean? Escaping doesn't help with links to spam or fraudulent sites.


So was this a UI-level filter that failed and let unsanitized HTML hit the server, which triggered the auto-block?


The employee has responded to all other questions that don't touch on the specifics of their fraud detection algo.

After those responses, it sounds like they have a behavioral threshold that looks something like: "x videos that include an external link or HTML, created or updated within y amount of time, for a given account or IP address", where the limits are probably much tighter on the free accounts.
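If that guess is roughly right, the core check might look something like this (entirely speculative; the names and limits are made up):

    import time
    from collections import defaultdict

    WINDOW_SECONDS = 3600                      # "y amount of time"
    MAX_LINK_EDITS = {"free": 2, "paid": 10}   # tighter limits on free accounts

    _recent = defaultdict(list)  # account_id -> timestamps of link-bearing edits

    def flag_on_link_edit(account_id: str, tier: str) -> bool:
        """Record a link-containing video edit; return True if over threshold."""
        now = time.time()
        edits = _recent[account_id]
        edits.append(now)
        edits[:] = [t for t in edits if now - t <= WINDOW_SECONDS]  # sliding window
        return len(edits) > MAX_LINK_EDITS[tier]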


Something I've found to be true in both game design and software UX is that generally speaking, you want users to feel safe trying new things. This helps sidestep some problems that tech-illiterate users have where they become scared to guess where anything is and have to be shown how to do everything. In game design, it makes it easier to teach users new mechanics.

It's also a prerequisite to having a more intuitive, predictable interface. You can train people to just try clicking things or looking in their settings to see if something will work, but only if they're not worried that they're about to break the application. In a game, people will experiment and learn new mechanics, but not if they're worried that doing something wrong will come with severe consequences.

Reading this, I'm suddenly struck that policies and terms are subject to the same rules.

Of course I know that the point of having vague policies around stuff like this is very explicitly to not allow bad actors to feel safe probing your system. The point is to avoid the above scenario. But it occurs to me that this is a tradeoff. In order to keep bad actors from experimenting, you are also going to keep good actors from experimenting. People will be very careful not to go off the beaten path with your service, even to the point where they'll avoid building creative things.

They'll contact support more often instead of just trying things, and they'll be less creative with how they use your platform. Back when YouTube still had annotations working, I saw people building weird overlays and choose-your-own-adventure videos. Seeing that I could get banned this quickly for even accidentally stepping out of line in one place, I would never try something like that on Vimeo.

This doesn't mean that Vimeo's policies are bad. Vimeo may not even want people to experiment with their service. But the choice to immediately ban users, rather than popping up a notification or error message, is not just a policy decision; it's a UX/design decision, and it changes the way that ordinary people will interact with the company.


It's important to quickly abandon companies that have automated account closure systems.

The message has to be conveyed to these companies that automated account closures result in loss of business.

Automated suspicious-activity systems should flag the issue for human attention within the company, where presumably someone will make a considered decision and communicate effectively with the account holder to understand and resolve the issue.

Auto account closures are completely unacceptable where someone is paying for that service.


First, he mentioned explicitly that he did not pay for their service; he was a free user.

Second, every system open to the public that has even a moderate number of users has automated abuse detection and mitigation techniques in place. It's flat-out irresponsible not to have such systems, ESPECIALLY if you have paying customers, as the opportunities for abuse are huge.


It feels like this comes from a point of view of privilege. You would rather thousands of people work at a mind-numbing, call-center-like task, and the company spend hundreds of thousands of dollars on it, just so there are no false positives, rather than have users experience a temporary problem that can be resolved with one or two emails.


Reminds me of that startup that recently got knocked off the web by Digital Ocean's automated system.

Scary to think that for us average Joes, our accounts can be nuked with no recourse unless we have a large Twitter following to get someone important to notice.


> Scary to think that for us average Joes, our accounts can be nuked with no recourse unless we have a large Twitter following to get someone important to notice.

Wasn't Google the original author of this kind of customer service? Don't be Evil, but do go ahead and be Kafkaesque.


You navigate these types of controls every day without realizing it. The web is now a nuclear testing range and these insanely aggressive countermeasures are necessary to operate.

My assumption here is that they have a back-end system auto-blocking injected HTML that makes it to the server, and some UI-level filter failed. I'd love to hear the follow-up and would expect this answer.


I'm surprised customer service just doesn't exist. Google can get away with it, but Vimeo, with 15% market share, can't afford to throw in the towel on support already.


> Google can get away with it ...

It's interesting. They definitely feel they can "get away with it".

However, they're likely building up a sizeable group of people whom they've negatively affected, who will return the favour in spades when the opportunity presents itself.


How much customer service should a $0/month subscription buy?


Put another way: videos uploaded by the average user earned Vimeo 'x' dollars over the last month.

Without any revenue they close up.

How many customers can they afford to lose? 5%? 10%? 90%?

If they don't support the free-tier users, they will lose them. At one point Vimeo was making a serious run for global market share... now it looks like they are in a different market.


> At one point Vimeo was making a serious run for global market share

Vimeo hasn't been going for global market share of video watching in well over a decade - it ceded that competition to YouTube very early on by policing content more actively (Viacom sued YouTube in 2007 for $1B because of its lax enforcement of rules).

Vimeo chose a different direction, launching a paid account in 2008: https://vimeo.com/1977937


At least enough to unsuspend the account after they suspend it by mistake.


I'm thinking about making a website that lists all the dark patterns and user-experience fudges of a given service/app/website, so that we users/customers can better choose where our time/money is going.

Users would be able to complain about: Phantom credit card charges, captcha walls, requiring cellphone verification, random account termination, etc.

What do you think about it? I think it's really susceptible to astroturfing, but maybe there's a way to fix that.


If I grabbed 50 friends and we all posted identical and untrue complaints on your website, stating that you randomly deleted our accounts, would you delete our accounts?


Not those accounts specifically.

Maybe I would have to implement a system that deletes accounts automatically when comments are identical. It's unlikely, though, because I prefer to err on the side of deregulation and would not like to delete accounts, even if I'm 90% sure that those accounts are bots. I presume innocence until proven guilty (where "guilty" means breaking the site rules).

It's impossible for me to know which complaints are true and which aren't for every complaint on the site, so I wouldn't delete any of them based solely on their truth value. I wouldn't even delete those that I know are false.

It's all imagination, though; I don't know if I'm ever going to create the site. I just wanted to gauge interest and maybe receive some criticism.


I used to use Vimeo for hosting product videos that I embedded onto my website. It was better than YouTube because it didn't show "recommended videos" (often with competitors' products or useless videos) after it finished playing. But then Vimeo started doing it too for embedded videos, so I switched to YouTube since users are more familiar with its UI.


You can pass rel=0 to YouTube iframes to only recommend additional videos from the same channel. https://developers.google.com/youtube/player_parameters#rel
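For example, an embed URL of this shape (VIDEO_ID is a placeholder):

    https://www.youtube.com/embed/VIDEO_ID?rel=0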


For now, at least. "rel=0" used to mean "don't show related videos at all"; now it means "only show related videos from the same uploader", and I wouldn't be shocked if YouTube limited its functionality even further in the future.


At least they tell you what's going on, some other websites will just throw you a "login failed" and that's it.


> indicating that it may be in violation of our Acceptable User Policy.

Well, was it? The author makes no claim that he didn't violate the policy.


I didn't find anything in any of their policies indicating that submitting markup for inclusion in the video description is a violation.



