> The issue is not just about making graphic content disappear. Platforms need to better recognize when content is right for some and not for others, when finding what you searched for is not the same as being invited to see more, and when what's good for the individual may not be good for the public as a whole.
> And these algorithms are optimized to serve the individual wants of individual users; it is much more difficult to optimize them for the collective benefit.
This seems to suggest that the psychological effects of recommenders and "engagement maximizers" are not problematic per se - they are simply deployed today without the right objectives in mind.
I find this view problematic, because "what's good for the public" is so vaguely defined, especially if you divorce it from "what's good for the individual". In the most extreme cases, this could even justify actively driving people into depression or self-harm if you determined that they would otherwise channel their pain into political protests.
If we're looking for a metric, how about keeping it at an individual level but trying to maximize the long-term wellbeing?
> when what's good for the individual may not be good for the public as a whole
Is as good a summary of what Facebook has done wrong as anything I've read.
The problem is not that Facebook and its ilk are inherently evil, but that they seem willfully ignorant. Ignorant that past a certain scale they have an obligation to the public: an obligation very different from the laissez-faire world The Facebook started in.
The internet majors seem to be gradually awakening to this, but I'd argue that only Apple (with their stance on privacy) and Google (with their internal privacy counterparties) really grok the change. And to be fair, both have business models that can tolerate having principles.
When you've got a recommendation algorithm that could push someone to suicide or change an election outcome, you have a responsibility to optimize for more than corporate profit.
"when what's good for the business may not be good for the public both as a whole and as a set of individuals"
The byproduct of dominant market share in an industry where you influence people's thoughts is toxic responsibility.
And currently, some large companies are avoiding and externalizing the costs of that responsibility.
I see the Internet as a great force multiplier. Want to watch courses from top professors for free? Here you go. Want to buy a yacht? Here are videos reviewing the 10 best yachts. Endless entertainment to last you a million years? Check. Want to slit your wrists? Here are five pro-tips to make it quick and painless. It certainly makes everything orders of magnitude easier, as it's supposed to.
If I'm seeking information or encouragement about suicide, technically an algorithm that provides me exactly that is just doing its job, and I don't see why we would want to change (or, god forbid, police) that. What I'd see as a problem is when the algorithm becomes more eager to find this content than I am, fights with me to have its point of view accepted (like messing with elections), or becomes fixated on providing the content even after I've changed my mind. So maybe the best way forward is to enable the user to tweak the algorithm, or at least make it more responsive to changes in their mood and wishes.
That absolutely would be the way forward. However, my impression from blog posts where technicians explain the rationales and iteration processes behind recommenders and curation algorithms is that development is most often motivated by growth, with the metrics actually considered being "user engagement" and "user growth".
As such, I would argue that recommenders have always had an "agenda" separate from that of the user; it was just commercial rather than political: keeping the user on the site for as long as possible.
Consequently, I'm pessimistic that, under the current incentive structure, sites would make their algorithms user-adjustable just like that - doing so would simply be a bad business decision.
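To make the "user-adjustable" idea concrete, here's a toy sketch in Python. Everything in it is hypothetical (the `Item` shape, the `user_dial` parameter, the penalty formula); it just illustrates how a single user-controlled weight could trade raw predicted engagement against repetition of a topic the user has been binging:

```python
# Hypothetical sketch, not any real platform's internals: a recommender
# score with a single user-controlled dial trading engagement vs. repetition.
from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    predicted_engagement: float  # platform's estimate, 0..1

def score(item: Item, recent_topics: list[str], user_dial: float) -> float:
    """user_dial = 0.0 -> pure engagement maximization (the status quo);
    user_dial = 1.0 -> strongly demote topics the user has been binging."""
    repetition = recent_topics.count(item.topic) / max(len(recent_topics), 1)
    return item.predicted_engagement * (1.0 - user_dial * repetition)

recent = ["self-harm", "self-harm", "cats", "self-harm"]
items = [Item("self-harm", 0.9), Item("cats", 0.6)]
ranked = sorted(items, key=lambda it: score(it, recent, user_dial=1.0), reverse=True)
print([it.topic for it in ranked])  # the binged topic no longer ranks first
```

With `user_dial=1.0` the heavily repeated topic gets demoted despite its higher predicted engagement; with `user_dial=0.0` you get the pure engagement-maximizing ranking back. The business problem is exactly that the dial's profitable setting and the user's preferred setting differ.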
Hell, if there's one thing you keep reading in psychology books about suicide, it's how institutions ostensibly meant to help people end up reinforcing suicidal thoughts. One way is by placing suicidal people together, at which point they also advise one another on how to go "painlessly". (Hell, I remember discussing painless ways to commit suicide several times with a group of friends on the playground in high school. Not at all often; once or twice in six years.)
(I must say, now that I know a lot more about medicine: slitting your wrists in the bath is pretty bad advice. Peaceful? Sure. But it takes a very long time and is easy to screw up in so many ways. Just the cold water alone will probably save you, and of course the bath will get cold.)
The second thing they do is even worse: making communication about it impossible. This is done through repression, such as locking people in their rooms (or worse: isolation rooms).
I've yet to hear a single story of people being held responsible. Why should Facebook face this sort of scrutiny?
Your local suicide prevention feedback loop really only encompasses your community, and revamping that system is left up to the people most affected by it. (The community)
Facebook/Google et al are everywhere, and are increasingly becoming everyone's problem. Google in particular has become so unreliable at finding what I'm actually looking for without an overly specific query, because it just has to push its idea of what it thinks I want rather than what I want.
Honestly, I'm almost to the point of starting to figure out how to write and provision the infrastructure for web crawling and search indexing just because I find I simply cannot rely on other search engines to give me a true representation of the web anymore.
Of course engagement is expensive when it has to be done by humans, and is therefore often explicitly not done in clinical settings. To put it differently: hospitals are surprisingly empty places for the patients staying there, and psychiatric hospitals are no different.
This is a shame, because engagement, even discussing the suicide itself, actually helps prevent the "slide towards suicide"; you just effectively can't provide enough of it with humans.
A very recurring element in descriptions of suicide is a long history of constantly dropping reaction/interaction/engagement and slowly increasing "somberness", suicidal thoughts and discussions, then suicide attempts. Then, days or sometimes less before the actual suicide, you see a sudden enormous spike in engagement with staff, and while we obviously can't ask, it seems deliberately designed to mislead. Staff often "fall for it": that spike seems designed to get staff to hand the patient the means for suicide, to somehow keep them from responding to it, or to get the patient information, essentially when staff aren't looking for some reason, such as during a shift-change meeting.
When push comes to shove, once enough will to commit suicide exists, nothing even remotely reasonable will prevent it. So knowledge about suicide mechanics seems to me much less destructive than people obviously think.
People assume the knowledge must be dangerous because it is so obviously associated with suicides, but knowledge of suicide is not what causes suicides. It is therefore not "dangerous knowledge".
> > The issue is not just about making graphic content disappear. Platforms need to better recognize when content is right for some and not for others, when finding what you searched for is not the same as being invited to see more, and when what's good for the individual may not be good for the public as a whole.
Isn't this a good thing? It's very easy for politicians and bureaucrats to simply say "ban everything", so it's welcome that they're saying "it's complicated, we don't want to ban everything, we do want to make it harder for some people to access some content". It's a more honest discussion.
Maybe it's not only algorithms that have a problem: I think it's our very belief that Instagram should do something to curb certain ideas and push others that's wrong.
While calculating similarity scores is getting easier by the day across a lot of content formats (think image classifiers, sentiment scores, etc), the same is not true for suitability scores.
Technically, I'd think calculating suitability would need more than just matching patterns based on some selected criteria, which is how essentially all recommendation engines work today.
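As a toy illustration of that asymmetry (all vectors and names here are made up): computing similarity between two content embeddings is a few lines of arithmetic, but nothing in that number tells you whether the content is suitable for the person seeing it:

```python
# Illustrative only: similarity is cheap to compute from embeddings,
# while "suitability" needs context the embedding does not carry.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Two hypothetical content embeddings: near-identical vectors score high
# regardless of whether the content is appropriate for this viewer.
post_a = [0.9, 0.1, 0.4]
post_b = [0.8, 0.2, 0.5]
print(cosine_similarity(post_a, post_b))  # high similarity, zero information about suitability
```

Suitability would additionally need a model of the viewer's state and of the likely effect of the content, neither of which falls out of a nearest-neighbour lookup.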
Then again, that course of action could be a slippery slope: what happens if the algorithms start censoring things that could potentially upset us? We could end up in bubbles, completely unprepared and unwilling to face the hardships that life presents.
I think the problem with personalized advertising is that it often isn't personal, since the algorithms base their assumptions on data gathered by observing people who haven't lived through the same experiences. I mean, yes, we can average things out and disregard outliers in the hope of maximizing our finances, but by doing so we'd be neglecting the individual circumstances that have befallen a person.
I suppose this is the million dollar ethical dilemma that advertising companies are struggling with, right? Too much moderation makes content stale, but a lack of it makes things dangerous.
There's a life outside the internet.
This was never the point. The article describes how people who have a predisposition to self-harm get an above-average amount of content related to that, which just compounds the negative feelings they may already have, possibly accelerating their actually taking the step of harming themselves. Respectfully, but you seem very insensitive to the subject matter.
eg this example of media reporting guidelines that appear to have reduced deaths by suicide: https://www.ncbi.nlm.nih.gov/pubmed/18082110
> In Austria, "Media Guidelines for Reporting on Suicides", have been issued to the media since 1987 as a suicide-preventive experiment. Since then, the aims of the experiment have been to reduce the numbers of suicides and suicide attempts in the Viennese subway and to reduce the overall suicide numbers. After the introduction of the media guidelines, the number of subway suicides and suicide attempts dropped more than 80% within 6 months. Since 1991, suicides plus suicide attempts - but not the number of suicides alone - have slowly and significantly increased. The increase of passenger numbers of the Viennese subway, which have nearly doubled, and the decrease of the overall suicide numbers in Vienna (-40%) and Austria (-33%) since mid 1987 increase the plausibility of the hypothesis, that the Austrian media guidelines have had an impact on suicidal behavior.
If a gambling company sent sales representatives to Gambling Anonymous meetings to offer free bets to recovering addicts, there would rightly be a public outcry and that company would likely be penalised or stripped of their license. I can open an incognito tab right now, visit an online support forum for gambling addiction and almost immediately start seeing advertising for gambling; thanks to ad tracking algorithms, those advertisements will start following me around the internet. That behaviour isn't any less antisocial simply because it's automated and online.
The idea that gambling companies should be allowed to specifically target gambling addicts is not a popular policy position, but it's the default behaviour of online advertising platforms. Personalisation and targeting algorithms are innately amoral; they only reflect the values of a company or a society if they are specifically engineered to do so.
This isn't a binary argument between "the internet should be a lawless free-for-all" or "the internet should be regulated until it resembles network television". It is primarily an argument about corporate social responsibility - companies should not be insulated against the externalities of their business practices. We're never going to completely agree on how social media companies should behave and moderating content at scale is immensely challenging, but that doesn't give them a free pass to ignore the risks and negative effects of their platform.
If the Gambling Association made a bunch of posters about their new lotto and paid anyone who put them up in buildings, and the person running the Gambling Anonymous meeting came and picked up a few to put up during their meetings, do you blame the Gambling Association or the one who picked up the posters?
Ads are currently such a nightmare because almost everyone making money off of them has chosen to go with services that handle everything instead of filtering their ads and hosting them locally because letting those services handle it all pays better. It allows for far more tracking and targeted ads, sometimes for better and sometimes for much worse.
He's the current Secretary of State for the Department of Health and Social Care. He's by far the most tech-orientated SoS we've had for years, doing a lot of work to push digital in health. He's rampantly pro-IT.
Sometimes when politicians make requests like this (make it harder to access images of self harm) people dismiss them as "think of the children". That would be a mistake here. He's not asking for all images to be removed; he is asking for the malgorithmic pushing of self harm content to vulnerable people to be fixed.
People sometimes complain about laws that appear out of the blue. His tweet above is the start of a long slow process of building a law. It's a clear warning: get better at self-regulating, or we'll regulate you.
The lead for suicide prevention in the UK (Professor Louis Appleby) has this to say: https://twitter.com/ProfLAppleby/status/1089528954158043136
"Self-harm images on Instagram just part of problem we need to address. In our national study, 1/4 under 20s who died by suicide had relevant internet use & most common was searching for info on methods"
and this: https://twitter.com/ProfLAppleby/status/1089525522084884480
"Important change in political/social attititude. Just a few years ago, internet seen as free space, no restrictions, complete lack of interest in #suicideprevention from big companies. Now mood is for regulation, social responsibility, safety."
Finally, here's my example of malgorithm ad placement. I've mentioned this example before, and I think it got fixed (so thank you if you fixed it!), but I search for suicide-related terms for my work, and sometimes the ads are terrible.
You're absolutely right! One reading is that it's a request. Please fix this problem, before we have to regulate you into fixing it.
Is it possible that there may be an alternative reading? A cynic might suggest that humoring such a plea is a great way to demonstrate that content problems like this can be solved! Then regulators can require those very useful tools be applied to whatever they please in a much more general way.
The odds that whatever tools Secretary Hancock gets to solve the very real, pressing problem he has so wisely pointed to will be completely inapplicable to literally anything else are virtually zero. I can think of a few places where safety and social responsibility mean things like never disagreeing with The Party.
As technologists, it's on us to think through the consequences of our choices where we can. It's often not plausible - nobody thought TCP/IP would lead to malgorithmic ads! But tools designed to enforce arbitrarily defined social mores?
In times gone by we'd generally expect that children realise what happens in a movie or a videogame is fantastical.
By contrast, social media is treated as a set of interactions with real people, whether those be your friends or whoever else.
Even posting here on HN is an example. The platform guides me; my (and I assume your) viewpoint of what the development community thinks about things is swayed.
I don't think the platform creators are to blame as much as, well, the entire society we're in. We really need to push organic interactions with the communities we're in, the people around us, not online bubbles with incredible bias that aren't even necessarily made of real humans.
Couldn't these same systems be trained on moderator censorship to learn what to weed out?
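In principle, yes. A crude sketch of the idea (hypothetical data, and a deliberately naive word-counting classifier standing in for whatever a real platform would actually use) is just: take past moderator removals as labels, and learn which features predict removal:

```python
# Hypothetical toy version of "learn from moderator decisions":
# count word frequencies in removed vs. kept posts, then flag new posts
# whose words appeared more often in removed content. Illustration only.
from collections import Counter

def train(examples: list[tuple[str, bool]]) -> tuple[Counter, Counter]:
    removed, kept = Counter(), Counter()
    for text, was_removed in examples:
        (removed if was_removed else kept).update(text.lower().split())
    return removed, kept

def flag(text: str, removed: Counter, kept: Counter) -> bool:
    words = text.lower().split()
    removed_hits = sum(removed[w] for w in words)
    kept_hits = sum(kept[w] for w in words)
    return removed_hits > kept_hits  # crude majority vote

# Made-up moderation history: (post text, was it removed by a moderator?)
history = [
    ("tips for hiding cuts", True),
    ("how to hide self harm", True),
    ("cute cat pictures", False),
    ("harm reduction support group", False),
]
removed, kept = train(history)
print(flag("hiding self harm tips", removed, kept))  # True
```

The hard parts a real system would face are exactly the ones this toy version ignores: adversarial rewording, context (a support group versus a how-to guide can share most of their vocabulary), and moderator labels that are themselves inconsistent.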
Obvious, kind of funny, kind of sad.
That's pretty demonstrably not true.
I know that I've been caught in YouTube's recommendation trap before; good luck getting out without deleting all your previous history. While I'm 'wise' enough to notice this and clear my YouTube history, does that option even exist for Instagram, and would kids or the vulnerable think to do it?
- Okie, I won't be thinking about a pink elephant.
Machines are getting more and more intelligent. They find content for us, summarize things, generate speech and text.
Look at how complicated human society is. Trying to directly program out questionable and socially unacceptable behavior is next to impossible, since the border is way too thin.
There are loads of unwritten rules about minors, for example, that vary from culture to culture. About what is ok for what age.
So anyone who uses machine intelligence opens themselves to liability. Filters are needed, and everyone needs the same set of filters.
Unfortunately, from a business perspective there isn't really much other choice. If Youtube solely put itself out there as a commodity video hosting site (with no discovery), then people could switch at the drop of a hat. Whatever they were using for discovery would simply grow a video hosting feature, and we'd get Newtube. As in all media, what matters is the captive audience.
The only real path forward for freedom is to repudiate this corporate-mediated garbage and start seriously adopting software based on peer interaction. As long as third parties remain in the loop, the incentive to blame them for not doing our preferred magical thing is just too strong. Hopefully this can happen before these censorship calls grow loud enough to start targeting alternative protocols themselves.
We’ve more or less tried both extremes of public vs private owners of capital. Could we try a commons based economy next?
I mean, at least developers, seems capable of producing massive wealth in a decentralized fashion more or less motivated by the public good (or aligned interests) as open source, with very limited capital assets. What if we could make more capital available to that part of the economy?
Trying to discourage self harm or teen suicide is totalitarian censorship? Such ridiculous hyperbole.
Behaviour influence for commercial purposes (via data-gathering and targeted-advertising) is one of the biggest topics that HN users are generally critical of regarding the big tech companies.
Surely we can be just as concerned about these mechanisms when they may lead someone towards serious/terminal harm to themselves or others.
When large media outlets* are not mindful of the kind of content they push to younger people, they're failing society and should be held responsible for it. This, of course, is not mutually exclusive with parents' responsibility to monitor what kind of content their children are consuming.
* I am intentionally using the term media outlet instead of social media to draw the parallel with television and radio. Consider how you would react if TV or radio pushed graphic content to an audience they knew, as a matter of fact, were kids.
You don't know anything about suicide. It's okay to be ignorant about a topic. It's less okay to be so ignorant while giving such strong opinions.
People die by suicide. For some of them the family all knew the person was suicidal and were powerless to prevent the death. For others nobody knew the person was suicidal, and we don't know if the death could have been prevented.
For the case talked about, it's likely that the culture of self harm prevented her from seeking help. She was accessing content that gave advice about hiding self harm from others, and about the futility of mental health treatment.
This takes time, and is the reason why we need to treat kids differently and have legal and practical systems to protect them in their journey into mature adults.
> algorithms should be perfected but accusing them to be the cause of death of a fragile, insecure, mentally unstable adolescent is so utterly ridicule.
There is no question that exposing people to certain kinds of content impacts their mental health and outlook. We also know that even a cursory look at any kind of material online can end up with you being exposed to "related content" all over the internet, so it is possible that a perfectly healthy kid could stumble upon one or two such pieces of material and end up being bombarded by it everywhere.
But here is the important thing: the specifics of how or why these platforms are exposing kids to such content are irrelevant, implementation details in programmer speak, algorithm or not.
What matters is the simple fact that these platforms are exposing kids to dangerous content, and we should do something about it.
How? Involuntary commitment to a psychiatric institution?
The act of growing up is fundamentally one of rebellion, of doing things one is not supposed to do, of running away from the nest and being pushed away from it. While the expectation of privacy is something that varies dramatically across the world, I am doubtful that there is any culture where parents know everything their children are up to. The internet has amplified the availability of this privacy. As a closeted transwoman, I am grateful for the opportunities this has given me, of being able to find solidarity, but still keep it as one of my deepest secrets. But... I am also thankful for having grown up just before the internet turned into a vast, self-amplifying panopticon.
In the pre-internet world, there was more human mediation in information consumption: one consulted libraries which were staffed by human librarians, who could distinguish access to books about self-harm from books about self-help, one read magazines delivered home by mail, which would presumably have been easier for parents to monitor, or one would socialize with friends, in the physical world, who presumably had a greater interest in your well-being, so that empathy and other human judgments could best help people in need.
In today's world, with ready access to information, anonymously to one's physical circle of family and friends, but entirely publicly to the vast tracking network of automatic recommendation systems, there is no such ready access to help. The systems serving information are essentially paperclip maximizers, providing access to articles and links that maximize their narrow-minded objective functions. Which leads to sub-optimal social outcomes.
Of course the algorithm is doing exactly what it was designed to do. But what it was designed to do is not what we really want it to do, as a human species, with complex, empathetic, and altruistic objectives. On a final sidenote, conveying emotions online is very tricky, but we are talking about a family who has just lost their daughter. Her parents are almost certainly devastated, questioning every little act of theirs. It would be nice to exercise restraint and temper one's comments.
You do not need aggressive autoplay, recommendations plastered everywhere, and "we miss you" emails every few hours to organise the world's information.