After a livestreamed suicide, TikTok waited to call police (theintercept.com)
54 points by crivabene on Feb 7, 2020 | 36 comments



This (again) highlights the need for tech companies to take responsibility for what happens on their platform. And it probably won't happen without increased regulation.

Imagine you build a real-world football stadium. Then, you open the doors up to the public for free with minimal restrictions or rules for how the space is used.

You brand the stadium as a "platform" and you invite the public to treat it as such. Six months later you have people doing drugs on one side of the stadium, a white supremacy rally being held at the other end, and a group of people in the bleachers playing Russian roulette with the occasional casualty.

The police eventually track down the owner of the stadium. The owner says "Yes, I own the stadium, it's my property, but sorry Mr. Police, the stadium is a Platform for free expression. I have no control over what the public does in there."

--

Introducing new regulations is usually an unpopular suggestion. It raises the barrier to entry for competitors. It limits the market to players who can afford to comply.

But at a certain point, IMO we should stop giving tech companies a free pass just because they're a "platform" that's "too large to moderate".


We do have public parks in New York and other cities that are privately owned. People walk through them, homeless people sleep in them, and, yes, people sometimes do drugs or engage in other illicit activities. The police patrol them like any other park.

Here's an example: Zuccotti Park in NYC where the Occupy Wall Street protests were held: https://en.wikipedia.org/wiki/Zuccotti_Park They're called POPS: https://en.wikipedia.org/wiki/Privately_owned_public_space And here's a list of them in NYC: https://en.wikipedia.org/wiki/List_of_privately_owned_public...

Are you now saying that the owners of these parks are legally liable for the actions people take in them? What about city parks: is the city government liable?

What do you base this legal theory on? Why haven't these park owners been arrested if this were true?

Do you have any legal training?


Comparing a physical space to a digital one is absurd.

Regulation will be used to suppress speech and to further serve these platforms' paying customers: advertisers, politicians, special interest groups, corporations.

These platforms are already on their way to becoming sterile and dominated by big-money incumbents, much like cable TV is today.

If they have to be held accountable for the things their users do, they will simply wall it off even more, ban anything that seems slightly outside of the Overton window/status quo, and there will be absolutely no recourse for small content creators who are unfairly or wrongly banned.


> Comparing a physical space to a digital one is absurd.

I agree that it seems absurd, but only because we tend to think of digital life and physical life as totally separate legal realms. I think over the next 10 years, as digital and physical life become even more intertwined, this will start to change.

> Regulation will be used suppress speech and to further serve their paying customers: advertisers, politicians, special interest groups, corporations.

I also agree. But you have to balance that against the public benefit of moderated platforms vs. unmoderated ones. It's a really tricky balance to get right, which is why I think a lot of people don't like the idea of more regulation.

It's a tricky problem. There might be a better solution. But I don't think the solution is to keep things working the way they are.


I'm an idealist who believes that all internet platforms should be decentralized to the greatest extent possible, with open protocols and open source desktop software from diverse origins.

To be able to do what you suggest, to have control over what the public does on a platform, the platform would need to be as centralized as possible, and that's why I disagree with you. I don't want our current software ecosystem to become even more centralized than it already has.


Friend-to-friend networks[1] bring the self-moderation that most real-world networks rely on to the digital world, without imposing a centralized power structure.

No surveillance needed: if you see one of your peers (/"friend") committing a crime, it's on you to report it. On the other hand, if they are breaking a law nobody cares about (e.g. buying weed), nothing is going to happen to them. That's how society works already.

RetroShare, though it's almost dead now, was a wonderful example of this. File sharing was anonymous, but limited to a certain number of hops through your friends and the friends of their friends. As such, for a regular user, it had all the warez any extended group of friends would want to share among themselves, but none of the child pornography or snuff films you could expect to stumble upon in "global" anonymous P2P networks. And it could even be free of warez if your friends abided by intellectual property laws.

[1]: https://en.wikipedia.org/wiki/Friend-to-friend
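

To make the hop-limit idea concrete, here's a minimal Python sketch. It's hypothetical, not RetroShare's actual protocol; Node, befriend, and search are illustrative names. The point is that requests travel only along direct friend links, and a TTL counter caps how far they can spread:

    # Hypothetical sketch of hop-limited friend-to-friend search, not
    # RetroShare's actual protocol. Requests travel only along direct
    # friend links; a TTL counter caps how many hops they may take.
    class Node:
        def __init__(self, name, files=()):
            self.name = name
            self.files = set(files)
            self.friends = []  # direct, mutually trusted peers only

        def befriend(self, other):
            # Friendship is mutual: both sides add the link.
            self.friends.append(other)
            other.friends.append(self)

        def search(self, keyword, ttl=2, _seen=None):
            # Return matching file names reachable within `ttl` hops.
            seen = _seen if _seen is not None else set()
            if self in seen:
                return set()
            seen.add(self)
            hits = {f for f in self.files if keyword in f}
            if ttl > 0:
                for peer in self.friends:
                    hits |= peer.search(keyword, ttl - 1, seen)
            return hits

    # alice reaches carol's file through bob, without a direct link:
    alice, bob, carol = Node("alice"), Node("bob"), Node("carol", {"mixtape.ogg"})
    alice.befriend(bob)
    bob.befriend(carol)
    print(alice.search("mixtape", ttl=2))  # {'mixtape.ogg'}
    print(alice.search("mixtape", ttl=1))  # set(): one hop only reaches bob

In the real protocol the relaying is also what provides anonymity: the friend-of-a-friend only ever talks to their direct peer, not the originator. This sketch shows only the hop limit.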


This falls down a bit in the GP's example though, doesn't it? Those in the stadium committing the "crimes" of roulette, drugs, or the rally would say that whatever they may be doing is "breaking a law nobody cares about."


That doesn't absolve anyone of responsibility in regard to the law. Whoever was in contact with them and suspected them could report them to the police. In most jurisdictions, witnessing a crime and not reporting it is also punishable. It simply prevents the over-zealous censorship that happens when you put all responsibility on a single actor (Tumblr, YouTube, ...).


>In most jurisdictions, witnessing a crime and not reporting it is also punishable

Usually such laws only apply to specific crimes.


> "This (again) highlights the need for tech companies to take responsibility for what happens on their platform."

Does it? They took the video down and revised their processes to respond to such incidents. What other responsibilities do they have?


Call the police immediately.


But the person was already dead.


Highly likely but are you certain of that?

Are you certain a crime has not taken place?

A death has happened. PR strategy would be the furthest thing from my mind - but I'm a human, not a corporate entity.


Plus it would be reasonable to assume at least one of the many live watchers already did that...


I don't see how that would be useful. Police are local, and random internet streams are not usually traceable to a location, which is the one thing the police would need.


The timeline in the story seems to confirm that assumption.


Really? I mean, either (a) a few drug dealers slip through the cracks, (b) you fill the nice fun stadium with creepy surveillance cameras to cover your own ass, or (c) the stadium simply doesn't exist. I'm all for options (a) and (c), but option (b) is the worst.


> You brand the stadium as a "platform" and you invite the public to treat it as such. Six months later you have people doing drugs on one side of the stadium, a white supremacy rally being held at the other end, and a group of people in the bleachers playing Russian roulette with the occasional casualty.

So what? People are responsible for their own individual actions. None of what you described is hurting anyone else. It may not be a pleasant place, but that's okay because it's private property. Now consider that in the digital world there's no possibility for physical harm and the example becomes entirely pointless in any case.


We're talking about a suicide, so there is possibility for physical harm.

(I'm not at all saying TikTok should be responsible for this person's suicide. Only that moderation of online platforms can have real physical consequences. When moderators respond appropriately to posts involving suicide, stalking, and child pornography, they can help prevent abuse in the physical world.)


It doesn't take much imagination to swap the grandparent's contrived examples for your own definition of actually threatening deeds and get their point, does it? Would "operating a slave-manned factory" or "training volunteers in crafting concealed remotely-detonated explosives" fit the bill?


You're putting words in my mouth. Your examples aren't consistent, either:

> operating a slave-manned factory

This is wrong. Slavery violates the personal liberty of the enslaved.

> training volunteers in crafting concealed remotely-detonated explosives

I don't see an issue with this. The information is already available online.


This article doesn't mention that there is good evidence that publicity surrounding high-profile suicides often leads others who are at risk to attempt suicide too. That's why (along with respect for families) suicides are often described more obliquely as 'x has died' when the deaths are reported.

Ensuring that these types of incidents have as low a profile in the press as possible may well have saved lives.


This article sounds skewed against TikTok. Were the police never contacted by anyone about the suicide? Maybe they had already been contacted? The article leaves this big question out, which makes it sound as if TikTok alone was responsible for calling the police.


Bystanders have a responsibility to report, at least where I am. Even if they are not the only ones responsible, they are still responsible.


Bystander has a different meaning online, though. Is a viewer located halfway around the world still a bystander when they are not governed by the law surrounding an event and may not have any way to influence said event? Is this same person culpable?


Yes, it'd be the same as if you were on the phone with the person.


That doesn't make sense. With a person on the phone, you most likely know where they live. If you're on an 800 number, how do you contact the correct authorities? A public account on TikTok can't tell you who to call... unless it has a video of Ghostbusters, but that won't help.


Please feel free to email them for the answers. They may have this information already.


The principal failing here seems to be the lack of a prepared plan of action for such events, and confusion within the TikTok offices as to how, specifically, to respond.

When you are dealing with The Public and public activities, at scale and volume, you've got to be prepared for such actions.

This means not only detection (which functioned poorly), but response.

- Contact first responders and emergency services.

- Disable harmful, inappropriate, or disturbing content.

- A set of prepared communications for specific circumstances (and adaptable to others), so that you're not scrambling to put together language in a crisis situation.

It's worth noting that with networks such as Facebook and Google (the late unlamented G+, YouTube) having literally billions of users, simple actuarial statistics mean that tens of thousands of accounts are likely to represent deaths on any given day. Contingency planning simply has to account for more than puppies and rainbows.
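

A rough back-of-envelope supports the "tens of thousands" figure. Both inputs below are assumptions of mine, not numbers from the article or the comment:

    # Back-of-envelope only; both inputs are assumed, not sourced.
    accounts = 2_000_000_000               # order of magnitude for the largest networks
    deaths_per_person_year = 7.5 / 1000    # rough global crude death rate
    per_day = accounts * deaths_per_person_year / 365
    print(f"{per_day:,.0f}")               # ~41,000 account holders dying per day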

During my own time on G+, at least two of the thousand or so people I followed closely, both within a relatively close circle, took their own lives. One was a complete shock, though he'd been acting somewhat oddly (purging activity and accounts, then restoring them) over the preceding months. The other had made threats over the course of a year or so, including a previous instance in which I'd tried frantically to find a way to alert local authorities thousands of km away. The first time there was an intervention. The second time, not. Thankfully, in both cases the evidence wasn't a stream, but silence.

I followed up as best I could with several Googlers at the time. There's frustratingly little which can be done, though some capabilities were offered.

TikTok's response here strikes me more as simply poor planning, and the Intercept's criticisms as perhaps unduly harsh. Though yes, foresight, planning, drilling of those plans, and an ever-so-slightly less self-serving slant to the response would be preferred.

Requiescat in pace, Dieter and Dawn.


Sounds like they did the right thing. The police couldn't have helped by the time the company was aware of the post. Certainly at least one of his fans had already called them anyway.


I just heard today that TikTok is live-moderating its content; perhaps they weren't doing that back then?


As noted in the article, they were:

The team of moderators, referred to internally with the anodyne moniker of “technical team,” is the only unit that works alone, apart from other TikTok operators and staff. This team’s responsibility includes pulling any content that violates TikTok’s terms of use or is offensive to the app’s user community. Live broadcasts, however, are monitored by an even more elite team operating in China, where ByteDance has its global headquarters. The team in China was responsible for monitoring the app and discovering João’s suicide as it was livestreamed. Yet no one saw it.

https://theintercept.com/2020/02/06/tiktok-suicide-brazil/


[flagged]


Your original comment was probably downvoted because it took the thread on a flamebait tangent, which breaks the site guidelines.

These parts:

> care to reason the downvotes?

> Or just CCP-patrolling?

... break the site guidelines even more. We ban accounts that do that, so would you please read https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here? We'd appreciate it.


Some people downvote comments which question downvotes.


[]


Spoiler warning, please!



