The recommendation engine wants people to spend as much time as possible on YouTube.
Netflix wants the same, even though it’s not even ad-based!
Facebook wants the same.
This is just addiction peddling. Nothing more. I think we have no idea how much damage this is doing to us. It’s as if someone invented cocaine for the first time and we have no social norms or legal framework to confront it.
One thing you can never get back is time. It runs one way and as of this writing, we all have a finite amount of it.
Conspiracy videos are the least of the problem.
I’m clearly addicted, but does HN have any sort of algorithm designed to maximize my time on it?
I’m sure these companies do everything in their power to make their services more addictive; however, we have to consider that this is just looking for a scapegoat.
"Like email, social news sites can be dangerously addictive. So the latest version of Hacker News has a feature to let you limit your use of the site. There are three new fields in your profile, noprocrast, maxvisit, and minaway. (You can edit your profile by clicking on your username.) Noprocrast is turned off by default. If you turn it on by setting it to "yes," you'll only be allowed to visit the site for maxvisit minutes at a time, with gaps of minaway minutes in between. The defaults are 20 and 180, which would let you view the site for 20 minutes at a time, and then not allow you back in for 3 hours. You can override noprocrast if you want, in which case your visit clock starts over at zero."
Go look at something such as a phpbb site where topic ordering is driven solely by 'bumps' - the most recent topic to have a comment gets bumped to the top of the page listing. You'll find they rarely have any worse ordering of content than voting systems provide. In my opinion it tends to be simply superior. It also avoids the huge problem in point based systems of requiring some 'parallel universe' /new stream that only a tiny and biased minority ever view.
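For anyone who hasn't used such a forum, bump ordering is about the simplest ranking there is (a sketch; the field names are invented):

    # Sketch: bump ordering vs. point ordering. Field names invented.
    def bump_order(topics):
        # phpBB-style: the most recently commented topic floats to the top
        return sorted(topics, key=lambda t: t["last_comment_time"], reverse=True)

    def point_order(topics):
        # vote-based: highest score first; new items start at the bottom,
        # hence the need for a separate /new stream that few people read
        return sorted(topics, key=lambda t: t["score"], reverse=True)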
But voting systems drive "engagement." One intuitively surprising discovery is that downvoting users tends to drive them to comment more, frequently with lower-quality posts following. I say 'intuitively surprising' because it seems counterintuitive, yet if you've spent any time on point-driven message boards, including here, you see the constant stream of little tit-for-tats as a downvoted user (or two) engage in an ever lower quality of 'debate' in some isolated thread. As for upvotes, it goes without saying that that little micro-shot of dopamine is what built, and arguably sustains, the social platforms of today.
As for bump-ordering, I lost most of my youth to phpbb-like forums, and I'd say they foster a mindset of even more obsessive interaction because not only do non-nested threads become impossible to follow if you're away for long enough, but keeping your preferred topics on the front page requires frequent monitoring and carefully-timed comments.
You mean like this: https://news.ycombinator.com/item?id=19129671 which directly links to the 'parent' of your comment and is formatted almost exactly like a 'Ask HN' post?
HN, like reddit, optimizes for engagement using the post ranking algorithm.
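For reference, the widely circulated form of HN's ranking (which may be outdated, and omits the various penalty factors) is roughly:

    # Approximation of the published HN ranking formula; the production
    # version reportedly adds penalties and may have changed since.
    def rank(points, age_hours, gravity=1.8):
        return (points - 1) / (age_hours + 2) ** gravity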
That upvotes are visible to the user is a big one. Hacker News (or Reddit for that matter, which employs a similar mechanism) would work just as well for the purpose of conversation if your score was hidden.
But seeing that number next to your name climb up is something that drives a lot of engagement on the platform. I don't think this is intentional on HN's part, but on Reddit for example I have no doubt that the activity generated by the 'karma' mechanism is left in place precisely because it generates so much (meaningless) content.
Even email can be habitual. Stop sending them and (spam and newsletters aside) you stop getting emails.
It's too easy to be an addict today.
I think humans spent the 20th Century learning how to live with the availability of unlimited sugar and fat.
I could easily consume 10,000 empty calories per day. But instead I’m eating a bag of fresh organic vegetables as I write this because I learned to resist my biological urges to gorge myself on foods designed to trigger them like chips, cookies and soda.
I think people in the 21st Century need to learn how to live with the availability of unlimited information. To consume more of the mental equivalent of organic vegetables and less of the mental equivalent of soda.
Adults are just children who grew up. There's still no reason to believe they will just "learn" on their own.
Instead of favorably comparing yourself, with your bag of organic vegetables (my imagination is running wild picturing what that must look like) and your superior intelligence which led to self-learning how to escape sugar and fat, to the rest of the American populace who clearly need to learn, devote some of your time and expertise to actually making an attempt to educate and help those who don't know better or who are trapped in addiction.
That's why so many professionals are so adamant about sharing their theories on nutrition.
It's just the unfortunate reality that there are also countless special interest groups with particular motives when it comes to spreading disinformation.
(I do share links to studies I find interesting, but I don't really believe I can evaluate them and know which ones will hold up.)
I do think it takes a certain level of expertise to start making claims about things like Keto without just repeating something you heard elsewhere, but a diet with lots of vegetables is undeniably healthy.
Don't be afraid to share the knowledge you feel confident in. Just hold back on the stuff in which you aren't confident. :)
If you're feeling confident about something without having done your due diligence, that's a separate issue.
The effects of binging television are less clear, especially when it’s accepted in advertising that binging television is OK. Look at all the Comcast ads that celebrate binge watching.
At least with food advertising it’s been tamped down a bit. Lay's potato chips used to have “bet you can’t eat just one,” like it was some sort of drug pusher.
I think it's more insidious than that, at least in the ads I've noticed.
There's a lot of normalizing of binge watching. Of making it a "given". When I say "normalizing" I mean things like "Since you're going to be gorging on your favorite shows anyway, use our phone/our service/our devices to do it on, because they're the best".
It's not even a question in many ads; the assumption is that you're going to be binging already. Celebrating would be closer to beer commercials ("it's been a hard week, now it's netflix time!").
When did 'binge' ever become appropriate to use as something to be encouraged? Feels insane. I mean, even binge eating vegetables probably shows a problem; it should just be part of a balanced diet.
The problem was that pathological/excessive users were overly skewing the recommendations algorithms. These users tend to watch things that might be unhealthy in various ways, which then tend to get over-promoted, and lead to the creation of more content in that vein. Not a good cycle to encourage.
Netflix may have this problem, but FB, YT and other (relatively) open access platforms definitely suffer from it to varying degrees.
Like the author, I do find it refreshing that at least some steps are being taken to identify and remediate. A lot more can be done.
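A toy example of how a few pathological accounts can dominate a naive engagement signal (all numbers invented):

    # Toy illustration: raw watch counts let a handful of obsessive
    # accounts outweigh many casual ones. All numbers are invented.
    casual_users = 100    # each watches a mainstream video once
    heavy_users = 3       # each replays a fringe video 50 times

    mainstream_views = casual_users * 1    # 100
    fringe_views = heavy_users * 50        # 150

    # A recommender trained on raw views now "learns" that the fringe
    # video is the more engaging item, and promotes it accordingly.
    print(fringe_views > mainstream_views)  # True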
Recommendations are simply a service that makes my life easier and potentially gives me something cool to watch (they don't usually work that well, but that's not the argument here).
But it's not actually true that they're not ad based: They follow the great new strategy of native advertising.
I’m the kind of person who avoids trailers and on-the-next-episode-of spoilers, and Netflix assaults you with both of these things. The only way to turn it off is to leave.
Would you treat your friends like that? Say a friend asks you to recommend them a good video on software development. Are you going to optimize your recommendation so as to make them take the longest time possible to achieve their goal?
No, I'd link them to relevant videos that helped me in the past, and then maybe link them some more later on to help them further increase their skills after learning the basics. These hypothetical videos were interesting and helpful to me, and I figure that since I found them interesting maybe my friend will too. That's what it sounds like YouTube is doing, only they get money.
Employable skill = bad
Travel = bad
Family = bad
activism = bad
traditional news media = good
entertainment = good
video games = really really good
YouTube most definitely does not optimize for videos I “might like”
If Netflix wants people to spend more time on Netflix, they should probably stop adding forehead-slapping "features" like mandatory autoplaying previews that encourage people to stop spending so much time surfing through the back catalog.
Of course, it's vitally important to Netflix's business model that you don't discover just how limited their catalog is. That's not an issue for YouTube, and it's interesting to consider how these requirements feed into UX engineering at the two companies.
It’s why HBO is not even a serious competitor to Netflix, even though they tend to focus more on quality instead of quantity.
It’s all about taking up your time. Netflix even says their main competitor is something like Fortnite, which, again, consumes a lot of your time.
Sometimes they're hilariously entertaining. Most of the time they're so off the mark that I don't watch them. If Youtube really wanted to increase my engagement they would stop showing irrelevant ads and stop crippling the free YouTube app. I mostly go to YouTube for movie trailers and videos from Deep learning courses. That's it, the end.
What YouTube appears to want is for me to pay to use it; it gets in my way and makes itself inconvenient in every possible way to try to persuade me to throw money at it. This will never happen, but hey, keep on trying.
I owned a book one time that had such a bad ending I didn't read it.
“Pemberton’s French Wine Coca Nerve Tonic,” the precursor to Coca-Cola, was a patent medicine purported to cure morphine addiction (from which Pemberton suffered after the U.S. Civil War).
The addictive effects of the coca leaf weren’t widely accepted at the time due to the limited concentration of active cocaine in the “wines” it was typically included in. That it had addictive qualities was a side effect, not a conscious decision to aid with repeat business.
Coke was pretty good about getting it out of their product when people realized it had bad effects, before there was any serious public discontent over cocaine.
You can't really accuse Netflix of the same.
> This is just addiction peddling.
In light of this problem of the radicalising of vulnerable people, as is happening on YouTube, your use of the word 'just' in this way shows that you are either ignorant of the problem or have a hidden agenda in trying to downplay it. I hope it is the former.
> your use of the word 'just' in this way,
When using 'just' in this way you down-play the problems. 'Hidden agenda' might have been a bit extreme on my part. What I had in mind was the down-playing of issues to seem smarter than others.
In the context of the Cumex fraud you could say "It is just stealing". When doing this you project that we shouldn't be concerned, it is _just_ stealing, which it isn't.
It becomes counter productive to solving the issues because when someone who doesn't know about the problems surrounding YouTube's algorithms and reads that 'it is just "addiction peddling"' they might conclude that people are overreacting.
My issue here is that, in YouTube's case, this is not 'just' attention peddling. I also don't agree with you that this is 'a logical consequence of a system that optimises for maximum engagement'. Netflix, as we have established, also wants my attention, but that doesn't mean that I go from Rick and Morty to Nazi propaganda. On YouTube you can go from listening to music to getting lectured by White Supremacists without actively seeking this extreme content out.
They have optimised their algorithm to make this transition very gradual in order for you to stay engaged along the way and in effect they have created a mechanic in which people slowly get radicalised.
It is very easy to down-play this by saying this is logical and just attention peddling, but I'd say it is closer to psychological abuse of people in their most private and vulnerable moments. All this _just_ to make money from advertisement. The algorithms might not have started out this way, but there has been enough criticism on what is happening for YouTube to finally do something about it, yet they haven't. This means it is intentional.
In this context, putting "just" before it can be seen as a way to remove focus from the detail that someone else is focused on in order to look at the bigger problem. That is what I see here as well.
Your conclusion that it is intentional comes too fast. There is a less evil interpretation of real life: it's not practical for them to implement this in a way that still yields addicted users while avoiding radicalised users. Their money is more important than a problem that a minority is vocal about; it's an unfortunate consequence, but that doesn't make it the intent.
Couch potato-ising people is terrible for our future too. And dismantles defenses of people who would otherwise laugh off the political indoctrination content in other media.
My fellows:
- Firefox: block all trackers
- Ghostery: block all
- remove all notifications
- remove Facebook
- do not engage with social media except professionally
- stick to long-read forms of entertainment (HN enters this category, with no video nor images)
My only curse is WhatsApp, which I use both personally and professionally.
> Remove all notifications
Including email? There are plenty of legitimate uses for notifications.
Otherwise, solid list and I especially like your last bullet point, but don't discount the amount of information that can be gained from a good video or infographic.
The people at work who have chat and email notifications turned on just spend their days reacting to whatever comes in.
I find that if something is urgent enough, somebody will call me or come to my office to get my attention.
The recommendation engine only turns evil when it takes the viewer down a never-ending path into dark, negative, useless material and misinformation.
Adults are citizens who have the freedom to choose. That is what being an adult is about. That's why kids need to be sheltered from certain things. They lack the experience and education to choose wisely.
Putting that burden on the state or a website is losing the bigger picture. Any time I hear the word addiction it makes me sad people are saying they can't make an adult choice in their lives and need someone to stop them.
In the 70s, as a reaction to the Club of Rome, they had this insane idea to freeze part of humanity until the imminent crisis was over. This is what the internet is: a freezer. It freezes social interaction, it freezes your desires, your plans; it freezes everything.
Becoming defrosted does not by default improve the world. It's ultimately a personal decision.
> One view of the status quo is that media companies are aggregating human attention and selling it at a discount–far below minimum wage–to advertisers in a massive arbitrage on human capital.
It brings to mind the greed and stupidity of a lord thinking that anything his subjects do which isn't earning him money is stealing from him.
Definitely. Unfortunately the appeal to emotion and rare cases are parroted by those that feel they do know.
> It’s as if someone invented cocaine for the first time and we have no social norms or legal framework to confront it.
Or airplanes or telephones or the printed word.
It's very difficult to contextualize and adapt to reading these short incremental bursts of text or 'Twitter threads'. It always feels like important context and exposition is missing. I seem to understand the gist, that Youtube seems to have promoted flat earth videos disproportionately, but the 'whistleblower' aspect is not immediately apparent.
The second sentence is "I worked on the AI that promoted them by the billions." and he then goes on to discuss the internal workings of the algorithm. That feels like the actions of a whistleblower to me.
The numbers in the middle of the text and the un-expanded short URLs are annoying, but I'm struggling to see what else about this is difficult to follow. I actually think Twitter itself is better in this regard than this threading service, which tries to hide complexity but just ends up causing confusion.
Basically it's tailored to push your emotional buttons.
>(If it decreases between 1B and 10B views on such content, and if we assume one person falling for it each 100,000 views, it will prevent 10,000 to 100,000 "falls") 14/
What does this sentence even mean? It's very hard to parse to the point of being incoherent.
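If I'm reading it right, the arithmetic is just: 1,000,000,000 views / 100,000 views per "fall" = 10,000 falls prevented, and 10,000,000,000 / 100,000 = 100,000. The sentence is clumsy, but the numbers do check out.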
The fact that people have gone from writing long-form essays to "twitter threads" to argue something is tragic.
Yes, although I think it accurately reflects the level of rigor that the authors apply in their thinking (and by that I mean 'not a lot').
In fact, the thread on Twitter itself is even easier to read than this ThreaderApp link that some people demand.
The incessant bloviating from people who refuse to learn how to use Twitter is far more annoying tbh, it's starting to sound like old people complaining about the music being too loud, and it always derails discussion.
It's just kind of a mess.
I was a heavy user of Twitter for multiple years. I've pulled back on it lately and it's really amazing how much I don't miss the weird, telegraphic kind of writing it tends to encourage. It really bugs me that one of our major means of communication forces you to filter everything through tiny text boxes that only recently became big enough for a whole sentence, and strongly discourages taking time to actually consider the flow of an idea through multiple paragraphs.
I can't see how people use Twitter for anything more than shouting into the ether at each other. It's just so messy and rambly if you try.
Then my life flashed in front of my eyes (by lack of better words) and I recalled a million lengthy conversations that pretty much made me who I am.
There was a funny video with a professor and a Twitter "expert" where the professor argued it was bad. The Twitter fanboy kept interrupting him halfway through his first sentence until he got angry and asked if he could say something now. The Twitter guy then said: But I already know what you were going to say. I laughed so hard. The conditioning had clipped his mind into 10-second attention bursts, and then he had to talk to himself out loud again. Nothing of interest was said in the interview. The Twitter guy thanked him and said it was a wonderful conversation. The professor frowned silently and looked at him from the corner of his eye. It was the best "what a fucking moron" face I have ever seen.
This just adds to the general uselessness of the recommendation feature - I've seen that video five times, it's good but could you offer me something else?
Nevertheless, the whistleblower is telling us that _your experience_ has less of an impact on the recommendation engine than anti-social obsessives.
Another gripe I have is that for creators who run multiple channels (and don't go out of their way to cross-promote), it is often very easy not to realise this until you manually check out their uploads page and realise you missed out on a lot of their content, all while the recommendation engine pushes the same couple of videos I have already watched over and over again.
The only occasion things clicked for me is when I was researching Pitcairn Island. For a few days there was a drip feed of obscure but highly relevant videos that keyword search failed to pick up. One day the stream stopped suddenly and has never returned since.
That being said, yeah the recommendation feature is just terrible
The recommendation engine is awful for me. It doesn't make me want to spend time on YouTube. It's amazing that at the same time it seems to learn how to get people addicted.
I think YouTube have gone too far the other way.
Everyone carries a set of basic assumptions about how the human system works, like what's acceptable public behavior, how the government should be run (which affects who you vote for), expectations about how certain people will act, etc.
These assumptions are based on the information we take in each day, like articles read, images viewed/scrolled past, and so on. We ARE our media diets, whether we want to admit it or not.
Our opinions are formed slowly, they change slowly, and there are several well known "bugs" in human thought patterns that make us tend to prefer an echo chamber and reject information that doesn't line up with held beliefs.
AI enables fine-tuned control of exactly what assumptions people gain and maintain. What happens if YouTube silently starts recommending PragerU videos to all the millions of high schoolers that match the "impressionable" profile? What would that do to the basic expectations of the citizenry about what the "right" kind of government and tax system is?
What if the platform was used to convince everyone that annual slavery reparations must be made in perpetuity?
No one wants to admit that ads work on them, but we all know they work in general. So how much of what you know is true, was fed to you? In a world where everything we consume runs through relatively black-box algorithms (black-box to us outsiders, anyway), how much of our knowledge and beliefs are our own?
Guess I'm getting myself down the rabbit hole here. There's no easy answers anywhere. I think it's only a matter of time until we get our own Ozymandias.
As someone who very rarely aligns with any of Dennis Prager's political dogma, that would be so much better than the "Top 10 Reasons The Earth Might Actually Be Flat" type that dominates recommendations now, albeit less profitable.
We should of course expect YouTube to behave in a way that prioritizes their financial benefit, etc.
Like you said, there's no easy answers anywhere. Understanding that all of YouTube's recommendations are effectively silent, programmatic, and optimized for profit, calculated on industry-standard, necessarily shallow engagement metrics like clicks and views, is a helpful start but seemingly uncommon knowledge.
The reality is that almost all online platforms are driven by user behavior. If those user behaviors lead to relationships between videos that drive recommendations, then that should be respected. Who is YouTube to say that's bad? However, if the quality of recommendations is skewed by one group of heavy users (i.e. it is being gamed), they could consider tweaking how much weight is given to such users and apply it across the board. That is, they need to apply such changes not just to 'conspiracy theory' videos but also to things like political videos from the left (and the right), which surely experience the same skew in recommendations from a minority of users.
An aside: some comments here seem to say that YouTube should keep this content but 'stop promoting it'. Well, not recommending content is in effect the same as censoring it outright. If an order of magnitude fewer users find some content, the net impact is the same.
By changing their recommendation algorithms, they can change the way a whole society thinks! Just look at the recommendation from the blog: "Recommend more round-earth videos".
So far, this extreme political power has been used without the platforms managing it, but it would only make sense for all of those platforms to use it to their advantage.
Right now it's just a tremendous black box, although it works well most of the time, in my experience.
Eh, the connection is tenuous. Sure, AI becomes biased by the users who use it the most, but then you really still have to stop and ask yourself: why is it we are talking about conspiracy videos instead of how people spend millions of hours watching cat videos, or memes?
There is a lot to say about the topic, but the answer really boils down to the fact that recommender systems are just good at working with the variables they have access to. I don't know of any AI that has access to "this person murdered someone" as an input. So literally, how should an AI be expected to optimize against that?
Manually tweaking the AI based on things in the real world makes sense; I just wish the author hadn't mentioned Tay, which is not really the same issue at all. Tay is an instance of deliberate manipulation, but YouTube is a case of wrong incentives.
I think it near-certain that YouTube is both. Considering the obvious value of getting your video into as many people's recommendation streams as possible, there's surely a whole "SEO, but for YouTube" industry out there, working furiously to game the system.
Anecdotally (albeit not "an industry" per se), on a number of occasions I've observed... let's call them "radical activist groups", organising off-YouTube to flood a favoured video or channel with activity, with the explicit goal of causing YouTube to make it more visible to "the masses".
There's an easy fix. Exclude the hardcore users (the top 10% say) from your recommendation engine.
It would probably lead to some interesting delayed feedback effects where things that are nearly popular get recommended, push out the popular, then the displaced stuff gets pushed into the nearly popular zone, etc. Perhaps some popularity + cooldown system would work.
Of course, all of this goes against the short-term incentives of maximizing engagement, so there's no way YouTube will ever implement this (even though it would probably greatly improve diversification of content).
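A sketch of what that could look like (thresholds and shapes are invented, not anything YouTube actually does):

    # Sketch: trim the heaviest users before computing recommendations,
    # and cool down recently promoted items. Thresholds are invented.
    import numpy as np

    def trim_heavy_users(watch_counts, quantile=0.90):
        """watch_counts: dict of user -> number of watch events."""
        cutoff = np.quantile(list(watch_counts.values()), quantile)
        return {u: c for u, c in watch_counts.items() if c <= cutoff}

    def cooled_score(raw_score, hours_since_promotion, half_life=48.0):
        # an item that was just heavily promoted is suppressed, then
        # recovers as the cooldown penalty decays away
        penalty = 0.5 ** (hours_since_promotion / half_life)
        return raw_score * (1.0 - penalty)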
The system I thought you suggested in my misreading seems complementary to your solution though.
Edit: For example, YouTube could have a feature that would let me filter out phrases like “top 10” or “amazed” from results.
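Client-side, that filter would be almost trivial (a sketch; the result structure is invented):

    # Sketch: user-defined phrase blocklist applied to result titles.
    BLOCKED_PHRASES = ["top 10", "amazed"]

    def filter_results(results):
        """results: iterable of dicts with a 'title' key (invented shape)."""
        return [r for r in results
                if not any(p in r["title"].lower() for p in BLOCKED_PHRASES)]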
Because we're begging for oversight. It is clear from reading comments and articles day after day. We also highlighted alcoholics, communists, druggies, terrorists, etc. in the past. The vast majority of people who, say, flew airplanes without consequence didn't matter anymore. We take the ones that validate our assumptions, appeal to emotion, and get oversight. We, the meager, need protection from the bogeymen.
All their life, they were exposed to newspapers and magazines. Suddenly their habit changed, and they were getting news from the newspaper on the internet. But it was always true or close to true; nothing was entirely made up.
And then one day, they started getting news from friends on WhatsApp, with links from YouTube shared to them. From their perspective there was never any change; it was still the internet giving them news. It never occurred to them that some of what they were watching was entirely made up.
I remember my wife's mom telling us that Steve Jobs launched the iPad on Dragons' Den. We had no idea what she was talking about, and then she showed us a doctored video of Steve Jobs on Dragons' Den, showing the iPad to everyone. She thought it was true. And we told her it was made up. And she couldn't understand why someone would spend time creating something false on the internet. She just couldn't understand it.
Neither of these is new. The scale and velocity is different, in a meaningful way, but "the internet" hasn't changed fundamental human behaviors there.
I wonder if it may be our fault somehow by telling them that not everything on the internet is crap.
For some people, the bar is a little low.
Yeah, the dumb ones do. That’s not because of the generation they grew up in, it’s because there are dumb people in every generation. There are just as many people in the newest generations who believe anything that comes from the mouth of an authority figure.
I am not sure of the solution, at all.
Another good example of resisting the tyranny of metrics is Meetup trying to avoid a runaway feedback loop in its recommendation system, as described by @estola: https://www.youtube.com/watch?v=MqoRzNhrTnQ
YouTube is dangerous; just like Netflix and Facebook, it's designed to get people to stay on for as long as possible, and obviously children are very susceptible to the design.
Seems like this is just an updated version of what people have been doing ever since the computer was invented.
'The French statesman Malesherbes railed against the fashion for getting news from the printed page, arguing that it socially isolated readers and detracted from the spiritually uplifting group practice of getting news from the pulpit. A hundred years later, as literacy became essential and schools were widely introduced, the curmudgeons turned against education for being unnatural and a risk to mental health. An 1883 article in the weekly medical journal the Sanitarian argued that schools “exhaust the children’s brains and nervous systems with complex and multiple studies, and ruin their bodies by protracted imprisonment.” Meanwhile, excessive study was considered a leading cause of madness by the medical community.'
'In 1936, the music magazine the Gramophone reported that children had “developed the habit of dividing attention between the humdrum preparation of their school assignments and the compelling excitement of the loudspeaker” and described how the radio programs were disturbing the balance of their excitable minds.'
I too have found myself watching a tech talk on youtube, and if the dopamine hits aren't coming fast enough, I'll reflexively open up a web browser and start surfing, or vice-versa. I've become aware, and try to stop it, but it is an urge for sure. I didn't do this a few years ago.
YouTube does not. YouTube just cares about what gets people to watch and watch and watch. YouTube will happily feed you endless rants about white ethnostates if that's what meets its criteria for "creating engagement".
What clouds this somewhat is that he spends about half the time on YouTube watching videos about his favorite games, where he learns more about them. There is just an addictive quality that I don’t like about YouTube.
Edit: power tools to be disassembled, not so much used
I don't doubt this, but it's unclear to me how much different this is from many other systems. I think there are a _lot_ of systems that have an operating point defined by a small group of hyperactive users, e.g. the vast majority of Twitter / Reddit / probably even Hacker News content is created by a tiny % of users. Even if you move past the content creation phase, curation is often also controlled by a small group of hyperactive users, e.g. the relatively small number of people who switch to the 'New' view on HN or Reddit and collectively decide what gets moved onto the front page where the masses will see it. It seems to me that an AI approach could be more 'fair' or 'democratic' in comparison to many of these existing systems in that it should consider the patterns of all users in some way, vs just relying on a small self-selecting group to perform the same function.
To wrap back to this YouTube situation, I think the deeper question here is why we seem to have so many people willing to believe anything they see on YouTube? More concerning I guess is the general idea that if we hide the offending content that is a 'solution'. It's really just putting a bandaid on a deeper problem and hoping it goes away.
The Facebook content-filtering end-of-year 2018 report talks about engagement going up the closer content is to a policy line (any policy: hate speech, nudity, etc.).
It says they are going to change the Facebook Newsfeed to downgrade content which is near to a policy line, with a reverse curve to the extra engagement it gets for being close.
I hope the YouTube change is going to be similarly fundamental. Really curious what it is and hope it is a good one.
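The "reverse curve" idea is easy to sketch (shapes and constants are invented, not Facebook's actual function):

    # Sketch: engagement rises sharply as content approaches the policy
    # line, so apply a penalty that rises just as sharply near the line.
    def adjusted_score(engagement, closeness_to_line):
        """closeness_to_line in [0, 1]; 1.0 means right at the line."""
        penalty = closeness_to_line ** 3   # negligible far away, steep near the line
        return engagement * (1.0 - penalty)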
The harmful effects may only be becoming apparent now. I for one never thought about this before.
I know it’s the exact opposite of a good monetization scheme, but it’d be a good step in putting the power back in the hands of the user, or the parent of the user.
That warship wasn't actually bombed by the enemy, they probably didn't use those chemical weapons, why is our state department funding the same terrorists we were just fighting yesterday, actually those moderate rebels aren't so moderate will become the new flat earth.
If so, this is is a misapprehension. Even unsupervised learning systems are subject to data based biasing. The selection of input, the decisions on how to partition the dataset, the decisions made on how to judge and measure overfitting, and potentially hundreds of other small hyperparameter decisions made by operators can substantially change the output of the system. In the case of the top post, it's very clear that "does this increase time-on-site" was an operator-chosen scoring function for the results of the system in question.
Frighteningly, these decisions are often unexamined and unrecorded. This has led to a series of impressive-sounding systems that can neither be reproduced nor truly audited.
And that's just the approximated function. The tensors it outputs are then further processed in some way, as they're often of no value in isolation. The decisions on how to use those outputs also have a profound impact on how we view the results of learning systems.
If anything, we're not cautious enough as a society about using this technology. We see all sorts of crazy in the news that's wrapped up in the over-hype. We see law enforcement failing to use even basic tools because they're so excited.
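The point about operator-chosen scoring functions is easy to make concrete (toy numbers, invented fields):

    # Toy illustration: the same two videos rank differently depending on
    # which objective the operator chose. All numbers are invented.
    videos = {
        "calm_explainer": {"watch_minutes": 8, "reported_useful": 0.9},
        "outrage_bait": {"watch_minutes": 45, "reported_useful": 0.2},
    }

    by_time_on_site = max(videos, key=lambda v: videos[v]["watch_minutes"])
    by_usefulness = max(videos, key=lambda v: videos[v]["reported_useful"])

    print(by_time_on_site)  # outrage_bait
    print(by_usefulness)    # calm_explainer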
From the linked article:
> “We designed YT’s AI to increase the time people spend online, because it leads to more ads.”
There's a 3rd. It's complex, difficult (to execute and prove), and hardly ever pushed. Put the onus on individuals to be realistic, honest, self aware, and diligent. Sure, it's not easy, not clean, and will likely lose more than it wins in the short term. But ignoring it completely is just...dangerous. Continuing to ask Daddy to change the world for you breeds less-than-desirable attitudes, beliefs, and perceptions. This concerns me more than any Internet video does.
Second, how do you plan to do this? Put a banner on every Youtube page that says, "Don't trust this video"? What we're doing now clearly isn't working, and I don't know what you propose to do that we aren't already doing.
Most people would agree that widespread, accessible, quality education is a key part of having a democratic, free and wealthy society.
We've started to see some online platforms doing a sort of reverse education. What is that?
It seems the idea of education itself comes from the philosophy of the Enlightenment.
I guess education is always the promotion of ideas and instructions. So in this case, we are seeing that the ideas and instructions that lead to democracy are not being educated properly, and other ideas are being promoted more heavily.
So the question is, what ideas and instructions should we educate people on? Should the algorithms be biased on purpose towards those? Should it be ideas of enlightenment emphasizing reason and individualism rather than tradition?
In this way, should education be regulated? Like should society protect itself against improper education? Who defines improper?
YouTube is proving itself to be a great educator. Very effective at it, but it seems its teachings have been controversial. They're not the teachings that lead to our free democracy in the first place.
Personally, I think we need to protect what we've achieved, and we have a duty to control education so that it reflects the ideas and instructions that worked well in the past to get us where we are in terms of democracy, freedom, wealth, human rights, etc. But I also recognise that whoever you give this power to could abuse it. Not an easy problem we're facing.
The internet is full of bullshit, so is the rest of the world.
If you walk down the street and do as every sign, store, person or organisation desires of you, you’re going to run out of time and money before you make it more than a couple of blocks.
You will live at the whims of those who are determined to influence you.
Why would we ban conspiracy theory videos when we have churches in every town? Some ideas require proof and others don't?
Is that surprising? Why would anyone want to see a video about how the earth is round? A conspiracy video is much more interesting, even if you don't believe the conspiracy.
The AI can work by selecting the right "warning" video for a given audience. Telling this to a 65 y/o is not the same as saying it to a 20 y/o.
To be clear, I don't want to turn this into a debate of left versus right or about what is fringe and what isn't. If there is a problem with the quality of recommendations on YouTube, I believe it exists more broadly and not just with 'conspiracy theory' content. My point therefore is to question whether this recent noise about YouTube recommendations is really about the algorithms or if it is just about the power dynamics between groups of people who have different opinions and want to block ideas they disagree with personally.
After all, isn't it just a vocal minority of hyperactive Internet users who are complaining about YouTube's recommendations and fomenting Internet outrage about it in the first place?
Ultimately, whose fault is it that a user watches a "conspiracy" video? The article seems to consider the user base as mindless people with their mouths open to be spoon-fed content. Do the viewers have no agency?
In that thread, why is the solution to curtail "conspiracy" videos? What defines a conspiracy theory? I mean, we all seem to agree here about vaccines and moon landing, but what's to stop YouTube from labeling a political party as "conspiracy" and filtering them out of existence? Can't the users choose for themselves, or are they too "stupid" to pick what's best for them?
Yes, YouTube is free to filter as they see fit... but if they want a recommendation engine, AND they want to filter out "bad things", it's a very valid question to ask who decides what is bad. I believe it is essential for YouTube to be transparent about what their algorithms are designed to filter out to avoid this scenario from happening.
Part of the solution that no one seems to be talking about is to caution others against passive consumption of media. YouTube et al. want users to plug their ears, close their eyes, tilt their heads back, and consume. YouTube can condition their streams to be as "healthy" as they want, for whatever definition of "healthy" YouTube chooses--but there is no way to healthily allow yourself to be fed a diet of things you don't pick, even if it looks good.
"Forcing" YouTube to do this will simply result in the lame, ineffective warnings you see at casinos. "Remember! Mindlessly consume responsibly! If you need help, uh... throw away your computer I guess!" This needs to be done on a human level, thinkers to parents, parent to children. Not a legislative level.
I think nearly everything in life gets biased by tiny groups of hyper-active users. For example, one story about a razor blade in an apple, published on the news for a week, leads many to think trick-or-treating is now a dangerous activity.
For me, a key thing is to limit exposure to ALL media. It's ALL biased in one way or another. Learn to trust your actual experience as much or more than what you read and hear from others.
Now I simply disable YouTube’s watch and search history and almost never care about what’s on the homepage.
And they're not going after every conspiracy, mostly flat-earth and antivax stuff (hopefully).
So I assume you'd be fine with them doing that for, say... Elizabeth Warren's videos but not Donald Trump's? Because, "hey, it's not censorship".
These are Dawkins's memetic viruses being spread.
BTW, why does he post Youtube conspiracy links? He should at least suggest using incognito/private!
People do the same thing with chatbots. If you have a text file with a list of bad words the bot isn't allowed to say, it's not really cutting-edge AI, is it?
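The "AI" in question really can be this small (a sketch, assuming a plain word list in a hypothetical bad_words.txt):

    # The blocklist "AI" in its entirety: refuse any reply containing
    # a listed word. File name and format are assumptions.
    with open("bad_words.txt") as f:
        BAD_WORDS = {line.strip().lower() for line in f if line.strip()}

    def reply_allowed(text):
        return not any(word in BAD_WORDS for word in text.lower().split())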
This raises the question: is it really YouTube's job to stop crazy? Where is the responsibility of the viewer?
This is how we get teenagers having to vaccinate themselves because their parents were stupid enough to fall for the antivax BS.
Because however inconsequential it is for everyday life (in general) whether man stepped on the moon or the Earth is a cylinder, it teaches people that "it's ok to ignore those nerds that say otherwise; whatever you believe, that's ok".
"Let's give stupidity a chance"? Sorry, no
Will this happen? Well, will YouTube prefer to make less money, just because you want your son or your coworker to be less 'derailed'?
Well, not a minority by far. If 10% of the US population believes in "conspiracy theory A, B, or C," that already makes it one of the biggest coherent, idea-based social groups, and it's only natural that the statistics got locked on the signal from such a big group.
If picking up major "psychocultural" groups is what it was designed for, it does its job excellently; and for the very same reason, it makes the spread of any material catering to such groups very easy.
That also explains why its targeting seemingly overlaps with the demographics being targeted by foreign propaganda: propaganda designed for a certain effect is always targeted at big, well-definable demographic groups. Efficient propaganda is a message that predictably causes the same desired effect on as many people as possible.
What if YouTube... didn't do that? It's already enormously successful, and could just show you other shows by the same user in the post-roll. Or videos in the same category.
YT pushed changes to optimise view numbers at absolutely any cost, because they could. But that doesn't mean they have to.
Why does the platform facilitate the branding of ignorance?
Blaming the viewer is ridiculous — repetition of propaganda is dangerous and eventually changes people’s minds. It’s unfortunate that an amazing educational and entertainment resource is ruined by dumb policy — I don’t feel it’s appropriate for my son to use YouTube independently, as it always leads you to a rabbit hole of crazy through the recommendation engine. Always, because ignorance is engaging.
> "I worked on the AI that promoted them"