Tech firms must tame toxic algorithms to protect children online (ofcom.org.uk)
52 points by w14 16 days ago | 117 comments



People love these proposals until they read the details and think through the consequences. Anything that requires "robust age-checks" means that everyone using the site must go through an ID check and validation process. No more viewing anything without first logging in via your ID-checked account.

> 1. Carry out robust age-checks to stop children accessing harmful content

> Our draft Codes expect much greater use of highly-effective age-assurance[2] so that services know which of their users are children in order to keep them safe.

> In practice, this means that all services which do not ban harmful content, and those at higher risk of it being shared on their service, will be expected to implement highly effective age-checks to prevent children from seeing it. In some cases, this will mean preventing children from accessing the entire site or app. In others it might mean age-restricting parts of their site or app for adults-only access, or restricting children’s access to identified harmful content.

Before people try to brush aside these regulations as only applying to sites you don't think you use, the proposal is vague about what is included in the guidelines. It includes things like "harmful substances", meaning any discussion of drugs or mushrooms could be included, for example.

Think twice before encouraging regulations that would bring ID checking requirements to large parts of the internet. If you enjoy viewing sites like Reddit or Hacker News or Twitter without logging in or handing over your ID, these proposals are not good for you at all.


Already happening in the EU. Give your phone number, credit card number, or government ID number if you want to watch age-gated YouTube.

The best solution would be a government or banking API that emits a one-time token. No logging, and it self-destructs upon verification.

But aside from the user, no other party is harmed by the current situation.


That's a really interesting idea. I was just wondering how any of this could be prevented in a way that preserves user choice.


While there's some complexity in the details of how you'd implement the protocol and avoid replay "attacks", there are potentially ways to use Chaumian blind signatures so that an age-verifying authority can (blindly) sign a token you present after verifying your age (through some means that likely won't be anonymous), in an unlinkable way.

As an overly simple thought experiment, you could generate a random ed25519 ephemeral public key, hash it, then send it (blinded, and thus unreadable) to an age verification service (with some long term age verification credential or similar).

The age verification provider does a blind signature on your (blinded) public key hash, and sends it back to you. You un-blind that signature (meaning that provider can't identify which identifiable request led to it, but now it bears the hash of your public key), and you can now authenticate to a service by signing a challenge with your ephemeral ed25519 private key.

The service only knows your ephemeral public key, and that it has been "vouched" (signed) by the age verification provider.

The age verification provider knows "you" asked for a token, but doesn't know what public key you used.
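
A minimal toy sketch of that flow, using a hand-rolled RSA blind signature (tiny key, no padding, entirely insecure - purely to illustrate the blinding/unblinding steps; a real deployment would use a modern blind-signature scheme):

    # Toy RSA blind signature: the authority signs a pubkey hash it cannot read.
    import hashlib, math, secrets

    p, q = 999983, 1000003                     # toy primes; real keys are 2048+ bits
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))          # authority's private exponent

    # 1. User hashes an ephemeral public key (stand-in bytes here).
    m = int.from_bytes(hashlib.sha256(b"ephemeral-ed25519-pubkey").digest(), "big") % n

    # 2. User blinds it with a random r, so the authority can't read it.
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    blinded = (m * pow(r, e, n)) % n

    # 3. Authority verifies the user's age out of band, then blind-signs.
    blind_sig = pow(blinded, d, n)

    # 4. User unblinds: an ordinary signature over m, unlinkable to the request.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m                 # any service can verify this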

Clearly there are challenges with replay (an authorised user could share the private key every day with a group of others), and revocation of a credential whose private key gets shared among a group is hard (beyond providers blocking a public key).

The risk is that this becomes a race towards "DRM" and platform attestation/authentication to try and prevent private keys being exported.


Maybe someone could chime in (cryptography is not my strong mathematical field), but isn't the entire point of ZK proofs (which are the hot buzzword in crypto) that you should be able to verify certain information (i.e. that you are of a certain age) without leaking any other information (which is similar to what you are proposing)?

Surely this is a better application of that rather than proposing another L2 to scale Ethereum?


Good question.

By my understanding, in principle yes, you could use ZK proofs - you can imagine it as a way to prove a certain assertion (age >= 18) in a way that isn't directly linked to other data. You sometimes see this in conceptual ID card specifications - using keys on a smart card to give a signed attestation about a single attribute without sharing other ones.

Ultimately though, when you need to actually implement it, you'll end up needing the same core concept as the thought experiment above - you'll need one or more "trusted central authorities" whose word is trusted on a given asserted attribute (age, etc).

They'll need a way to prove that they vouched for a user (since there's no purely digital way to validate age - it's an unverifiable claim). You'll then need a way for the "bearer" of a ZK proof to tie themselves to that trusted central authority's attestation, and you'll need a way to prevent the information needed to generate that ZK proof from being shared with others for replay.

A ZK proof will still need that external trusted authority for an attribute like age, because age isn't something you can root some kind of cryptographic trust from.

I'm not an expert in ZK crypto either, but it doesn't deliver a magical ability to prove a biological, analog-world claim without chaining back to a trusted verifier of that claim - which effectively means delivering the sort of "thought experiment" protocol above.

Sometimes though, the complex solutions tend towards the simple - you could issue people "age verification" smart cards (if you have enough confidence in CC EAL6 or similar cards, and their side channel resistance) which are "group keyed" with common attestation signing keys for every million (or another suitable anonymity threshold) users, and share the public keys used (to allow verification you haven't been given a special unique public key), and then allow signed card-issued anonymous attestations. That would work for as long as you can keep the smartcard-backed key secure against side channel/ key extraction attacks.

The user adoption challenge in all this is getting users on board and demonstrating it's a private solution rather than an excuse to oversee their online activities more. But I do believe you could build this in a manner where it would be easier to just identify users from their IP address and adtech trackers or similar external means than from the verification system itself.


There are so many inefficiencies and security risks that individuals have to put up with simply because the US federal government (or any other federal government) has not provided an identity verification API.

From banking to buying concert tickets, a way to prove one is human could be invaluable to ridding the system of the myriad proxies we currently use that inevitably result in discrimination.

The crazy thing is that it would be dead simple: the hardest part, having physical infrastructure all over the country, is already done. The US Postal Service already verifies people's identities for US passports.

Combine this with a constitutional inalienable right to receiving and transferring money to an electronic money account operated by the federal government, and we could get rid of so many inefficiencies.


The so-called "tech" companies love to tell the world, especially their advertiser customers, how they know "everything" about the people who use their websites, with ridiculous claims such as knowing more about users than users' own friends and family. Certainly the knowledge they claim to have would include age. If not, then any claims by so-called "tech" companies that they can serve targeted advertising to people in a certain age bracket are false.

Whereas if their claims are true, and they do know the age of their website users, then these so-called "tech" companies can solve this problem without needing to do age verification. By not targeting people in certain age brackets with certain content, they could stop the politicians from proposing legislation that requires age verification. But they refuse to do so.

Interesting.


Wouldn't they have to guarantee no false positives ever? Any child misidentified as an adult would be a problem. The result would be targeting so strict that too many adults would be excluded. And that matters even more if we consider that young adults are among the most lucrative demographics, which makes false positives more likely.

You can easily tell a 30-year-old from a 10-year-old. But can you tell a 12-year-old from a 15-year-old? Or a 15-year-old from a 20-year-old?

So tech companies want a system that is officially approved, so that they are not responsible if it fails.


Oddly, I think I agree with you. What we really need, and this is the reason it does not even enter the conversation, is a standard default that we can agree upon.

By this I mean: no more fucking around with the timeline. We agree that any corporation introducing a feed based on anything other than date, from newest to oldest, is subject to penalties and sanctions.

Will it stop 'innovation'? God I hope so. I am tired of innovation that farms up rage.

edit: From here we can start working on what algos CAN be included in customer facing crapola.


Maybe we shouldn't use government intervention and force as the default in all of these things.


What I can tell you for sure is that self-regulation has not been very beneficial to society as a whole (see the current impact of cell phones on youth). As to whether government regulation is a bad idea: at this point I believe it's an interesting academic objection that happens to be true often enough to an extent, but one that gets hijacked by corporations trying to avoid actual regulation - i.e. biased at best.


I don't think you can write off self-regulation so easily. I don't think anyone is claiming that it's a complete and perfect harm reduction. It usually takes time for social norms to develop, especially in the face of rapidly changing technology.

However, the alternative is very grim. It is essentially conceding that the average human is not capable of directing their actions, and they should be controlled by a higher power.


And this is the argument that I am willing to accept. We should be able to find some happy medium. I would hate to be told that from this point on I can only use quick sort by government decree, but you have to admit that current social media/tech has gotten out of hand in terms of power they wield.


I would argue that the governments have also gotten out of hand in terms of the power they wield. As a result, I think that we should be careful to ensure that any new developments are clearly empowering the individual, and not just claiming to benefit them.

I'm for regulatory options that put more power in the hands of users so that they can solve their own problems. I haven't decided what this means for age verification and algorithms, but there are some interesting options in this thread.


As someone from the cell phone impacted generation, I wish our elders would spend less time trying to protect us from the internet and more time building homes.


But who would buy them? Generation Y+ can't afford them.

I'm only half joking: The issue isn't the supply of new housing, it is the cost of building new housing. If we fix the cost issue, the new houses will follow.


> The issue isn't the supply of new housing, it is the cost of building new housing.

I fail to see the distinction. The cost barriers (most of which are legal/zoning related) reduce quantity supplied.

I don't believe it costs $2 million in materials or labor to build a condo in California.


I think tackling zoning is a good example of addressing fundamental cost issues instead of supply.

Cities actually building or mandating new $2 million condos for the poor would be an example of trying to fix supply without fixing the cost issue (which happens).

In general, reduction of regulation is needed to decrease cost.

I have family that just built a home in a county without zoning or building codes, and I can assure you it was quite cheap.


We all wish for an ideal future and, heaven knows, it is a good thing that I am not emperor for a day, as a lot would change. Personally, and I mean this in a nice way, I am not obligated to build you a home. I am not even obligated to do it for my kid. Frankly, neither is society as a whole.

You want your elders to build a home for you. No deal. Best I can do is help you along by pulling you away from your cell phone and saving your attention span a little.


I've always sort of felt that the whole "society is little removed from anarchy and no one owes anyone anything" is not all that far removed from "I can hit you on the head with a hammer and take your stuff if I feel like it".


Re: the hammer

Many people (myself included) think this is the true nature and shape of the world. There are lots of layers, institutions, and policies built on top, but what ultimately matters is who holds the hammer and what they want.

This isn't as cynical as it might sound. Most people ultimately hold some real-world power, and have organized into systems that help them get what they want. Anger a human enough and they will withhold work, anger them more and they will resort to violence. This is the basis of all society.


> Personally, and I mean it in a nice way, I am not obligated to build you a home. I am not even obligated to do it for my kid. Frankly, neither is the society as a whole.

I'll happily pay many multiples of what homes used to cost - just make it legal to build homes.

Let us do the things we want without having to cut through a thicket of laws intended to help and protect us. We don't want them.

Please, we've had more than enough of y'all's help.


I love your attitude. I hope you know that there are tons of people in older generations who also agree with removing nanny regulations.


Such a rule is impossible to write.


It is a lot of things (overbearing, difficult to implement, world-changing), but it is not impossible. If you have any doubts, check OFAC rules and regulations (and note how some seem contradictory in nature) and see how regulated institutions respond to those.


These kinds of proposals will hand the 'net to the darknet [its proper successor]. I can't wait for Freenet/Hyphanet to eventually get onion routing!


It's not clear to me that every site would have to perform age verification themselves. It seems like the "I am a minor" flag could be managed at the client operating system level (e.g. as a property of the cell account on mobile devices, or of the user account on a laptop or desktop machine), and transmitted per-request (e.g. in an HTTP header).
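
A minimal sketch of the server side of that idea, assuming a hypothetical "X-Age-Bracket" request header (no such header is standardized today):

    # Hypothetical client-OS-set header, checked server-side (illustrative only).
    def is_minor(headers: dict) -> bool:
        return headers.get("X-Age-Bracket", "").lower() == "minor"

    def serve(headers: dict, content_rating: str) -> str:
        if content_rating == "adult" and is_minor(headers):
            return "403 Forbidden (age-restricted)"
        return "200 OK"

    print(serve({"X-Age-Bracket": "minor"}, "adult"))  # -> 403 Forbidden (age-restricted)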


Then we would have mandatory online-only accounts for the OS "for your safety". This sounds even worse.


This might only be marginally relevant, but California's digital ID has a way of verifying age without revealing anything else about your identity, called "TruAge".


> California’s digital id has a way of verifying age without revealing anything else about your identity called “TruAge”

I'm not familiar with the system, but I assume it would necessarily have to reveal the sites you're verifying with to the State of California.

So it's less of a big deal, as long as you're okay with sending a record to the government about what site you're visiting every time you want to sign up somewhere or re-verify your age.

I'm sure someone in the comments will propose some cryptographic solution where neither party knows anything other than the fact that someone, somewhere, possesses a token associated with a person over the age of 18. If you think this is viable, you're not thinking like a kid trying to get around this system, nor a blackhat trying to take advantage of it: Many people would immediately set up a service that handed out age verification tokens in exchange for viewing some ads (the file sharing site model) if there were no limits and nobody could trace it back to the source. Any ID verification system must necessarily have some party able to verify the person to avoid abuse like this.


> TruAge encrypts your data points and then protects them even further by creating anonymous tokens. These anonymous tokens cannot be traced back to you without legal authorization from a court-issued subpoena

Yes, I think you are right. There is probably a way to make a fully anonymous scheme.


> There is probably a way to make a fully anonymous scheme.

A fully anonymous scheme would be ripe for abuse: People would immediately take their keys and set up websites that exchanged age verification tokens for watching ads. Kids would visit these websites, watch an ad for 60 seconds, and get a fully anonymous age verification token in exchange.

Identity verification systems only work if everyone involved has some incentive to protect their identity. If the identity means nothing and nothing can be traced back to you, the tokens will be generated for next to nothing and handed out freely.

The idea is DOA.


>legal authorization from a court-issued subpoena

No good technological solution that min-maxes on user sovereignty and privacy will allow the possibility of [GREENTEXT].


I always figured it'd be implemented Stripe-style, where completing age verification just gives the site a token that it can use to validate the third-party age check.

The problem is how to make the provider side anonymized so that they don't know what sites you're visiting, but that could probably be solved with legislation. In California, at least - I wouldn't trust Congress with a bill like this.


Agree. The idea is to verify your age, not harvest all your PII data across every login and viewing session. Companies can easily implement this privacy-preserving step; they just won't until it is strictly enforced.


I agree with your vision of the ramifications, but I feel that it's their response to basically unregulated big tech. Their powers are limited.


Who cares. I want the internet to be an ecosystem without any privacy (which is a drastic change from what I believed for most of my life). I believe anything of value will be available even without anonymity. For anything that absolutely needs anonymity, people will find workarounds, and I appreciate the extra technical barrier when they have to do so, so it's not available to every pleb.


The thing that always strikes me in all the reporting and discussion of the problems Ofcom is trying to solve is that no one seems to ask whether the problems are equally bad in other countries, especially non-English-speaking ones. And if they aren't, can and should whatever helps there be implemented in the UK?

I live in Norway and it doesn't seem that the problem is so severe here. Or is it simply that English speaking media is more willing to latch on to extreme events and make out that they are the norm?


It's a difference of scale. English is dominant in the number of countries that speak it, and even more dominant among the online population (people from countries that don't otherwise use English who are "overly online" tend to use it).

There is a larger population of bad actors, a greater variety of underlying cultural/philosophical differences and thus conflict, and the algorithms that seem fine for a smaller contained country like Norway can produce a different quality of topics at a larger scale. It's not just algorithm thresholds either - people are simply more naturally prone to follow fads when there is multinational scale affecting the quantity and rapidity of the content and replies/likes that they see (dopamine and confirmation bias).

Unsure what the solution is - maybe more location-based weighting of suggestions? Conversely, you don't want to empower local predators. So far, attempts at moderating the entire English-speaking world by the standards of SF-cloistered young professionals and PhDs have also been unwieldy and led to backlash.


There's a general election coming up in the UK, probably later this year. No date announced yet but they're starting to get TOUGH ON CRIME and other things in preparation.


I think you're just missing it, or it's not getting enough media attention.

YouTube is full of weird-ass kid content. The videos I've seen are mostly weird/scary, right on the border.

It's all garbage, but the YouTube algorithm recommends it more and more to the kids I've seen, because it notices the longer screen time.

I wouldn't say it's downright illegal content, but definitely not tasteful.


> no one seems to ask if the problems are equally bad in other countries, especially in non-English speaking countries.

Oh hell yes it is. Everywhere Russia has an interest in has been fraught with serious issues. Germany, Poland, and France come to mind - there have been reports about the spread of far-right and/or pro-Russian content for years now.

> I live in Norway and it doesn't seem that the problem is so severe here. Or is it simply that English speaking media is more willing to latch on to extreme events and make out that they are the norm?

You guys are simply too small to matter and have been in NATO from the start. In Sweden and Finland though, I had read reports of pro-Russian propaganda problems when they were in the process of joining NATO.

The unifying link is always Russia.


Is it possible that perhaps people just have differing views on Russia and great-power competition than you do? I find the Russia influence story overstated.


Sometimes I feel like there's some kind of gaslighting operation going on to make people forget about how much western Europe was in favor of things like NordStream (2). Russia definitely wasn't always the Boogeyman people now seem to claim. A lot of people had to be ok with Russia for things like NordStream to happen.


We're also supposed to pretend that Russia was behind the NordStream sabotage.


> A lot of people had to be ok with Russia for things like NordStream to happen.

And Russia paid a lot of people to think that way, either directly (Schröder) or with extremely cheap gas.


In most countries, the stuff this article is proposing to fight against "for the children" is already illegal to push to anybody.

So in most countries this proposal would be completely useless.


>far right this

>far right that

>*crickets* far left *crickets*

Man I am glad years ago I watched Yuri Bezmenov's interview with G. Edward Griffin.


> Man I am glad years ago I watched Yuri Bezmenov's interview with G. Edward Griffin.

Do tell. Content? Main ideas? A link? Anything?


Two youtube video IDs, should be the same thing:

/watch?v=s2b-I0Yqisc

/watch?v=9apDnRRSOCk


The anglosphere media is much more anti-tech than most other countries' media, so they focus on this stuff - and it's also a double US/UK election year.


Quite likely most AI is better suited to English, and all the big tech companies are heavily into AI with their algorithms.


> if the problems are equally bad in other countries, especially in non-English speaking countries

Well, Myanmar had a genocide blamed on social media (at least in part), so arguably things are worse there, not just equally bad.


US/UK really like spying on their citizens. Privacy is not their thing.

Their culture regards anything sexual the way the Taliban regards women not covering their heads: as an extreme taboo. As a fellow European: yes, I know, it's crazy.


Eric Blair's Anti-Sex League is coming into full swing, innit?

With a century of public policy fuggery, the domestic stock has been induced into a situation where it refuses to effectively breed its next generations in comparable numbers to other competing stocks. It could be mistaken for accidental if the elite weren't then back-stopping tax and investment losses with importing supply from those other stocks.

If The West is purely cultural/ideal, then mere assimilation should be enough for its persistence. If The West is its people then this situation spells certain doom if not arrested.

The West's place at the top of the world is going to be toppled by a nation with less feminism and anti-natalism this century, unless The West can destroy all other effective competition.


If you are European, this is basically glass houses and stones.


It's true, Europeans emit crazy Hitler particles when they see a Roma person.


Something important to keep in mind: most people never experience just how twisted these recommendation algorithms can get, because each of us gets an experience tailored to our developed tastes.

But these algorithms will totally curate wildly disturbing playlists of content, because they have learned that this can be incredibly addicting to minds unprepared for it.

And what's most sinister is how opaque the process is, to the degree that a parent can't track what is happening without basically watching their kid's activity full-time.

Idk if Ofcom is implementing this right or not, but I think there would be a much greater outcry if more people saw the breadth of these algorithms' toxicity.


If, instead of a machine algorithm, we wired a newborn human brain in a jar and "trained" it to choose the next thing to show in a feed - nothing but reward when someone clicks/comments/lingers on something, nothing but punishment when they leave - plus the basic human functions of walking and talking etc., then put that brain into an adult human body, what would we describe that person as? Probably a psychopath with no moral compass? A sicko willing to do indescribable horrors? Evil incarnate? A calculating manipulator, for sure. But somehow, when we "train" a machine model with the singular goal of profit alignment, people think that's just the free and efficient market. And the idea of aligning the models with human good instead is seen as overregulation.


It's quite obvious that Twitter/Google/Facebook/whoever do not have algorithms that scale to the point where they can genuinely curate their content. It's seemed that obvious since Google bought YouTube.

Isn't it quite obvious that it's never been their prerogative? Nor protecting copyright.


And it shouldn't be their prerogative!

If we're going to randomly throw blame around, then why not throw it on the ISP too? They're the ones ultimately serving the content. But I don't think anybody wants to open up that can of worms.


The blame is not random.

ISPs don't know what content they're serving up other than what host it's coming from 99% of the time.


Isn't it the parents' job? Why introduce authoritarianism under the disguise of caring for children?


The internet is a big place. How am I supposed to manage what Google / Facebook / YouTube etc. serves to my daughter?


Is there a reason you can't supervise your child's time online and/or use appropriate parental control software?


I suppose I would flip the question around - why should I have to do that, when content platforms could do it? It's just another administrative burden that gets foisted onto the plebs because large tech companies want to scale, but won't moderate content properly at that scale.


The content you do not want your child to interact with is your personal decision, and of course, varies from parent to parent. There is no permutation of acceptable administrative oversight to this issue I can imagine that would satisfy everyone reasonably, nor is there one that would not have chilling effects on free speech.

When it comes to content that is illegal specifically - that of course, should fall on tech companies to moderate. But that is the exception, in my view.

In short, your child's oversight is not one-size-fits-all - it is strictly your business, and perhaps your school's and childcare professionals'.


> why should I have to do that

Because you're the parent?

Yeah you can't monitor 100% of the time but like.. moderating your child's experiences is kind of part of the job isn't it?

Edit: I'm not saying the tech companies have no responsibility at all here, but surely the parent is the final responsibility in these matters?

When I was a kid if I went over to a kid's house and their parents let us watch R rated movies or whatever, if my parents didn't like that they would talk to the parents. If that didn't change, I wasn't allowed to go over there anymore

Why not the same with YouTube? If YouTube won't change, isn't it your responsibility to remove access?


> Yeah you can't monitor 100% of the time but like.. moderating your child's experiences is kind of part of the job isn't it?

Is the subtext here "don't have children if you can't do the job"?

If it is, then it's valid to discuss the difficulty of predicting what exactly the job of parent involves when it can drastically change due to technology and social norms over the interval of 5-10-15 years between making the decision and executing the role.

This isn't a blanket statement to abdicate responsibility, nor a blank check for unlimited responsibility, but certain unanticipated challenges are expected and some grace must be given in light of a dynamic environment.

Responsibility is an abstract concept that we operationalize in order to make judgements and decisions. Like any operationalization problem, how can you be transparent around its construction?


> Is the subtext here "don't have children if you can't do the job"?

I was going more for "you made the choice to have a kid. You have the job whether you want it or not, so you better step up and do it"

But I suppose the corollary of that is what you said. I don't think it's very valuable to say that to someone who already made that choice though

Anyways, like I said, I don't think Tech companies have zero responsibility here, but the buck stops at the parents, period

This generation's parents should not trust algorithms not to show their kids bad content any more than 90s parents trusted the teenagers at the movie rental place not to rent R-rated VHS tapes to children.

Edit: Television was a highly regulated and curated feed of media, maybe parents got a bit too comfortable letting their children sit in front of that without concern. But treating on-demand internet content like Television is a mistake

And expecting "the algorithm" to deliver a similarly highly regulated and curated feed is also a mistake


Would it be fair for parents to say, "My throughput is capped, and what was possible in the VHS and TV era exceeds my ability. However, if rather than being completely uncurated, social media systems participated in curating their content so that it had a minimum standard, much like TV did, I would have the bandwidth to responsibly curate the remainder for my kids"?


Did someone do that to your internet experience in 1999? Why do you think it needs to be done now?


Are you also leaving your daughter unattended at a McDonald's in a big city, just going about your business and expecting them to take care of her?


Yes, see the discussion on free-range kids.


This is a pretty weak argument IMO. Try applying this argument to any other regulated context:

- Should McDonald's be allowed to put crack in their Happy Meals, or is it the parents' responsibility to keep those meals away from their children?

- Should kids' clothing be free of asbestos, or is it the parents' job to avoid buying it?

- Should baby formula be free of lead, or is that the parents' responsibility to check for?

If a company is deliberately pushing a product that is harmful and (arguably in this case) addictive to children, that is a problem regardless of the parental role.


Hmm, I don't think that they're allowed to put cocaine or lead into food marketed for adults, either.


Perhaps a better example would be tobacco or alcohol. Also bad for adults, but we've at least agreed we should try to mitigate the harm that comes with early-development addiction.


If your problem is with the class of product, then you should know to stay away/keep your kid away. If the product has broken with the class (say, adding asbestos to kids' clothes), then we would want to regulate that. Really, as long as the product stays within the general confines of the definition of its class, it's up to the consumer to educate themselves and know better.

For a fun and controversial example, one could look at the RNA vaccines for COVID. Those had some properties that separated them from traditional vaccines such that people relying on the class of "vaccine" might have felt misled. As such, you would have expected government regulation to inform the consumer on the difference to expect in that scenario (which the government did).


I think you're saying that instead of regulating the experience for kids, parents should keep their kids away from that class of product entirely. In essence, treat all social media as adult-only products.

The biggest difficulty is coordinating this approach with other parents and institutions - it is punishing to be the only kid without a smartphone when peers (and increasingly, institutions) require you to have one to participate.

But that issue aside - it is still strange to allow for this class of products tailored to kids but parents are just supposed to universally agree should not be bought. There is clearly a society-wide issue here that we've left for individuals to solve. Predictably, it is not going well.


> it is still strange to allow for this class of products tailored to kids but parents are just supposed to universally agree should not be bought

The US is all about freedom. We also allow people to smoke even though smoking kills more than AIDS, alcohol, car accidents, illegal drugs, murders, and suicides combined. That's weird...but we still allow it. Conversely, I would expect the EU to implement the kind of legislation you're advocating for.


In my view, we need legislation to step in and enforce some level of algorithmic tuning. Modern algorithms drive engagement at all costs, regardless of whether it's healthy for the individual. I want to be able to tune the algorithm to use a timeline feed instead, or limit content to only come from topics I subscribe to, etc. We probably need parental controls that allow parents to enforce algorithm tuning as well.

A recent example of an algorithm going wrong is Reddit. Home used to show you strictly a feed of subreddits you subscribed to, shown as a timeline. The most recent changes not only removed the timeline approach to the feed; it now injects subreddits you don't subscribe to and asks if you're interested in them.
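
For what it's worth, the kind of "tuning" being asked for is tiny - a sketch of a strictly chronological, subscriptions-only feed (all names hypothetical):

    # Newest-first feed drawn only from subscribed sources:
    # no engagement ranking, no injected recommendations.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        source: str
        created_at: datetime
        title: str

    def timeline_feed(posts: list, subscriptions: set) -> list:
        mine = [p for p in posts if p.source in subscriptions]
        return sorted(mine, key=lambda p: p.created_at, reverse=True)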


It's curious how aligned this is with similar moves in Canada discussed here: https://news.ycombinator.com/item?id=40298552

For those unfamiliar, Ofcom is basically the UK telecoms regulator.


I have a better solution: tech firms must stop using toxic algorithms for everyone, not just children. Why are they allowed to use these practices in the first place? Why do we have to endure/tolerate this stuff that makes the internet a worse place?


No intention of protecting adults?


As an adult, I do not want to be "protected". I can decide for myself if I like the algorithm or product/service and choose not to use it if I don't like it.


Given that statement, I'm not sure you're fully an adult.

Every single person living in modern society is "protected":

- You're protected from people deciding which side of the street to drive on

- You're protected from a bank that tells you the money you deposited is "all belong to us"

- Even a "market" can't exist without "protection"

Once every person has their own army of ML engineers and psychologists at their disposal, then there will be an opportunity for personal choice in anti-social media.

Or much simpler, the anti-social corps can be required to allow each user to configure their own feed algorithm. This would be a useful "protection".

The negative consequences of not having _any_ "protection" in anti-social networks are clear across society.

These are analogous to the damage done in the US by the lack of gun protection. Guns are now the leading cause of childhood death in the US. Combine guns with 4chan/Discord and you get the rash of idiots shooting up the populace and then themselves.


The problem is that in modern societies, society foots the bill for those who choose, let's say, unwisely.

Why should my mandatory social and medical insurance taxes go toward fixing issues created by such wrong choices? Should society not cover them? E.g. should smokers with lung cancer pay out of pocket? Or should society try to minimize wrongdoing instead?


Indeed, there are different interpretations of the social contract.

You have essentially two polar options.

First option is that social services come with strings attached that entitle the providers to control the lives of the recipients.

Second is that social services are freely given, and the recipients have not given up their autonomy. The givers can choose to stop giving, or place conditions on gifts, but they don't get direct control over the recipients.

Either is fine in theory, but the problem is with ex-post recontracting - for example, when a gift or service is freely given, and then someone demands payment later.

There are some interesting works of fiction that explore alternatives. For example, allowing adults to choose how much of their autonomy they want to abdicate for differing levels of entitlements and guarantees.


So you're saying you want to be "protected" from people who make these unwise choices?

I would agree with this, in the sense that the people who institute predatory algorithmic human interfaces are the ones who chose "unwisely".


> is in modern societies the society is footing the bill

This isn't exactly true... all societies, modern or not, have always footed the bill. If you cast a person out in a medieval society, expect to get robbed by bandits on your next visit to the town next door.

Which emphasizes why you should attempt to minimize wrongdoing before it gets expensive; when you don't, it turns into that story of "there was an old lady who swallowed a fly".


Adults could potentially protect themselves, children absolutely can't.


I'm an adult with a history of ADHD. YouTube, Facebook, and Reddit are a disease for me. I am a moderator of some niche subreddits, but if I stray away I will find myself stuck reading comments and engaging in conversations for hours.

I take steps, like not using my phone during work or after a certain time, but it is extremely difficult and I still fall into the trap and watch shorts for 2-3 hours at a time. I'm not stupid either - I graduated with honors in CS and people consider me really smart. My brain just likes how it makes me feel.

Now I fear what all this stuff will do to my child, if even I have trouble with it.

I hate almost everything about the modern commercialized web and really miss irc and forums.


I am undiagnosed as an adult (raised by a parent who was anti-diagnosis about ADHD) but strongly suspect I have it too. I've never taken an amphetamine to see if there is improvement.

Privacy reasons aside, a big reason I ditched social media and much of the normiesphere technology is the distractability. I take great care and pride in reducing unwanted automated interruptions to zero. My phones only make a noise if a cleared human on the other side is initiating the noise. There is zero "check this out" or "your coins reward is ready" crap pinging me, and I won't have it!

ADDENDUM: my abridged phone stack is:

GrapheneOS (there is nothing better)

Mullvad (audited by police raid) & Tor

Syncthing-Fork (because f' the cloud)

Fennec (+uBlock Origin,Sponsorblock,Tampermonkey)

Tor Browser Bundle (anti-glowie browsing)

BraveNewPipe (multiplatform and search settings)

Seal (decent yt-dlp wrapper)

Session F-Droid & Briar (anti-glowie messaging)


> Youtube, facebook, reddit is a disease for me. I am a moderator of some niche subreddits

I had a friend who developed a problem with alcoholism. He went through a period of time where he'd stop by the bar several nights a week to have a single beer because he thought it was important to his social circle.

To no one's surprise, that one drink often turned into two drinks. Two became four, then six, then eight, then he was blacked out on a Wednesday again.

It took some time, but he finally admitted that his habit of visiting the bar was a trigger for the exact behavior he was trying to avoid.

When I see someone talk about how they have a severe social media addiction that they're trying to overcome, but then go on to talk about how they're a moderator on the platform they're trying to avoid, I see the exact same thing.

I don't know what else to say, other than that my friend is doing fantastic now that he's stopped visiting bars at all. He discovered that once he removed the bar as an option for his free time, he realized how many healthier alternatives there were in the real world. He wasn't constantly playing the game of moderating his alcohol intake because he wasn't in environments where he had to. He's much happier now.


For now at least, Reddit doesn't seem to have all those dark patterns you see on, say, FB (forced refresh if you turn your screen off on mobile, minimal and deliberately bad moderation).


> reddit doesnt seem to have all those dark patterns you see on say FB

Reddit is one of the worst offenders for rage bait, algorithmic feeds, content farming, and vacuous content.

> (forced refresh if you turn your screen off on mobile, minimum and deliberately bad moderating).

Reddit absolutely refreshes your feed under certain circumstances depending on the platform, such as hitting the back button.

Your claims about moderation are also completely backward. People complain about Facebook being too sensitive about moderation, but Reddit is basically the wild west. Nearly anything goes on Reddit and the mainstream subreddits are frequently full of highly upvoted misinformation.


This is so blatantly false I have to assume you made an impossibly improbable number of typographic errors.

reddit is 95% bots echoing the sentiments of a few liberal elites and the 5% of real users are childless aged women. No sexism, this is facts.


When facing off against an army of ML engineers and psychologists?

It is obvious that even adults are suffering negative consequences from social media, and if it isn't obvious, take a look at any of the studies showing the negative impact on adults of social media usage.

Every single consumer is outmatched. The only winning choice is to not play at all.


> Every single consumer is outmatched. The only winning choice is to not play at all.

If someone can't handle their social media usage, you're right that they should stay away.

But this idea that "every single consumer" can't handle it is patently false. The vast majority of social media users aren't completely sucked in to their apps and phones. The time spent on platform graph for every platform has a long tail where the addicts reside, but most people just don't get in that deep.

It can seem that way when you're surrounded by addicts or, more commonly, when your social interaction comes primarily through social media (because addicts are over-represented, obviously). However, what you're missing is all of the people who aren't engaged in social media all day every day.


While I 100% agree with these statements, I still think children should take priority: they are more vulnerable than even vulnerable adults. Eventually this whole dark-pattern enterprise should be stopped. I see it spreading its tentacles towards children, and children who are compromised so early in their development may be damaged too much to make a full recovery.


Children have legal guardians, who are responsible for their safety.

If adults can protect themselves, they can protect the children they're responsible for too.


The adult is vulnerable as hell, but children are an order of magnitude more vulnerable. Consider an adult with ADHD who can't stop themselves from falling into these traps, and then consider a child with the same condition - possibly the child of a parent with ADHD. The adult is at least aware something is wrong; for the child there's no baseline, and the patterns are ingrained deeper.


Next week: music studios must tame toxic lyrics to protect children.


Interesting... if you sob and moan to YT or Instagram about not having enough followers or views, they'll tell you to replace the word "algorithm" with "audience" - so, people. It makes sense: if your content is not popular with people, no algorithm will surface it (recent tweaking of the Instagram algo notwithstanding). But if we follow that interpretation, we have to admit that it's not the algorithms that are toxic, but people. So what Ofcom is asking tech companies to do is "tame" toxic people. Good luck with that.

Parents have to realize that computers, phones, and tablets help sometimes unsavoury characters get in touch with their children. We do not allow strangers into daycare centres, schools, or children's hospitals, so why do we allow strangers unrestricted access to our children via the devices we give them? Parents need to be told to take responsibility for who has access to their children.


> We want children to enjoy life online. But for too long, their experiences have been blighted by seriously harmful content which they can’t avoid or control. Many parents share feelings of frustration and worry about how to keep their children safe. That must change.

Yes, stop letting kids stare at screens all day. Yes, you are a bad/lazy parent letting the firehose of the Internet pipe into their heads.


Yes, and stop using screens as a pacifier. People don't like to be told how to parent, but I'd settle for them just parenting. Shoving shit in your kid's hand to shut them up isn't parenting.

I used to blame the shitty influencers and the internet at large for selfish, greedy brats, but it's their parents. I've met too many exceptions where the difference was just "we limit their screen time." Maybe they end up shitty in some other way, I dunno, but at least they're going to be more functional than my nephews, whose mother (my SIL) is constantly trying to figure out how they get sucked into racism and greed and what to do about it. Stopping them from getting all their views from greedy racists online would be a start - but she can't do that, she'd "never have a moment's peace." Idiot human.

Side note, if you let your kids play roblox unsupervised, you're fucking up hard.


Children at school use a screen for schoolwork starting in middle school (and sometimes even in elementary school). It is very difficult for parents or teachers to supervise this at all times. I think adults should educate children about safe online behavior, but as with other real-world experiences, they need to have some independence when online too.


I agree with you, but the inevitable counter-argument is that not letting kids have unfettered access to screens and the internet is impractical to the point of comedy. I personally do not buy this argument. Although I do not have a child, my sister has kept my nephew screen-free so far, and he's well into the age where kids start to use them independently. What does he do instead? He reads, he plays games (he's allowed some supervised access to video games), he plays sports. Stuff kids used to do and were totally fine doing.

The other argument is that this makes kids socially isolated from their peers, because they all have and use these devices. If that truly is the case, there is still a way to monitor your kid's device and internet consumption without giving them free rein. There are internet safety settings for parents on every device out there. Kids are perfectly capable of communicating with each other through a variety of mediums; they don't need TikTok (or whatever app is the trendy one of the decade).

The last argument, when you present the former argument, is that kids are clever and will find a way to get around controls. Yea, no shit, that's what kids do. Your role as a parent is to monitor that and teach/correct them.


Yeah, I'm not talking about no access ever. I'm talking about the (very sad, imho) situation where you see toddlers mindlessly staring at their tablets watching random videos/TikToks while being ignored by their parents for hours, even if the mom or dad is nearby and could be interacting with them.


No. The parents are responsible for their children, not tech firms or anyone else.


- Ofcom sets out more than 40 practical steps that services must take to keep children safer

- Sites and apps must introduce robust age-checks to prevent children seeing harmful content such as suicide, self-harm and pornography

- Harmful material must be filtered out or downranked in recommended content


What if we just ban all recommendation systems?


You'll have to be more specific... Does that ban cover everything down to alphabetical ordering?


If the user did not actively specify what they want to search for or subscribe to a content source, it is banned.


Just forbid recommendations altogether. Let the user subscribe to posts by specific users.


Personally, I am against the idea of adults having to prove their age before being able to access certain types of content - particularly if that means giving up their identity. I am not, however, averse to the idea that big tech companies should be more responsible for what they are serving to youngsters.

Yes, I know there are plenty of tools to allow parents to restrict what sites their children visit, etc., but not all parents are tech-savvy enough to set this stuff up. Plus, you could still allow a child to access YouTube, for example, but then find they are getting unsavoury recommendations from the algorithm.

This made me think about the fact that the major platforms (Alphabet, Amazon, Apple, Meta, and Microsoft) gather enough data on their users that they almost certainly know roughly how old someone is, even if no age has been provided to them. They can use all the signals they have available to provide a score for how certain they are that an individual is, or is not, legally an adult.

(As an example, if you have a credit or debit card in your Google or Apple wallet then you are almost certainly an adult because it would be very difficult for a child to obtain a card and get it into a digital wallet due to the security procedures that are in place.)

Given that, if these companies get forced to discern whether users are adults in order to serve appropriate content, then it seems a no-brainer for them to provide free age verification as well.

My vision would be for the UK government to provide an anonymised age verification router service. When a website requires you to verify your age in order to access some particular content, it could ask you which age verification service you wish to use. It then sends a request to the government "middleman" that includes only the URL of the verification service. The router forwards the request anonymously to the specified server (no IP address logs are stored). If you are already logged in to the account, it will immediately return true or false to verify whether you are an adult. If you are not logged in, you will be prompted to log in to your account with the service, and then it will return the answer. The government server then returns the answer to the original website.

That way, we can get free, anonymous verification.
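
A rough sketch of who learns what in that flow (every class, URL, and token here is hypothetical):

    # The site asks the router; the router forwards without logging;
    # the provider answers a yes/no claim and never learns the requesting site.
    class Provider:                      # e.g. Google/Apple: knows the user's age
        def __init__(self, adult_tokens):
            self.adult_tokens = set(adult_tokens)
        def is_adult(self, session_token):
            return session_token in self.adult_tokens

    class GovRouter:                     # forwards anonymously, stores no IP logs
        def __init__(self, providers):
            self.providers = providers   # {provider_url: Provider}
        def verify(self, provider_url, session_token):
            return self.providers[provider_url].is_adult(session_token)

    router = GovRouter({"https://idp.example": Provider({"tok-123"})})
    print(router.verify("https://idp.example", "tok-123"))   # True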

I'm sure people will have issues with this idea, such as "do you trust the government server not to log details of your request?" - to which I do not have a definitive answer, but I feel it is potentially a little better than having Google or Facebook know which sites I am visiting that need verification.

Anyone out there have any thoughts on this? I have only just had the idea pop into my head, so no serious thought has gone into it. There are probably issues that I have not thought about.




