I find a lot of sites feel like they're overtuning their recommendation engines, to the detriment of using the site. YouTube is particularly bad for this - given the years of history and somewhat regular viewing of the site, I feel like it should have a relatively good idea of what I'm interested in. Instead, the YouTube homepage seems myopically focused on the last 5-10 videos I watched.
The problem is the economics of the Internet today. Most sites are ad-funded. They need to maximize pageviews and time on site. The recommender is a huge part of accomplishing that, and it simply won't be tuned to any metric other than maximizing revenue. The solution would be to reskin popular sites with recommenders that had other objectives, such as retrospective satisfaction with time spent. Unfortunately sites know that if they give up control of the UI through an API that is open, they will lose money when people do things like this.
I really think we're entering a crisis here where sites and apps that are engineered to maximize corporate metrics are leading people down horrible paths of addiction, and psychic stress as they spend their mental energy resisting temptations constantly thrown at them.
Thought just occurred: would I be willing to pay a subscription to have the recommender tuned to remove revenue maximisation and site-addiction maximisation? Would anyone?
I've made plenty of "remove ads" in-app purchases on my phone. This isn't too different. And it might actually result in a truly useful experience.
There may be a public relations problem: users these days understand that ads pay the bills and don't see them as a moral compromise. But try explaining to your users that they can pay a premium to get relevant recommendations instead of "spam".
I've said the same thing many, many times. I can't remember the numbers, but the monetization of Facebook data per person (if I remember correctly from something I read a year back; I'll try to look it up and add it to this comment) is less than $20!
I would be very very willing to pay for Facebook to simply not be tracked, especially at such a reasonable price. I know that won't happen, but I'd really love such a thing.
It's only so low, because those are averages. The people willing to pay the twenty bucks might be exactly the ones that drive up the average, so they don't want to lose them.
That incentivizes using adblockers though, and given that ads are pretty much YouTube's entire business model, that would be nice for you but terrible for the business.
However, they do charge money per month for YouTube Red which removes ads, and a better recommendation system for Red subscribers might encourage people to buy it.
I'm not sure if it is because of weird tastes or something else. But for sites like Steam, YT, Netflix I always wish for more tuning parameters because their recommendations are all horrible.
On Netflix, for example, I get recommended stuff that's similar to other things I didn't like.
Amazon is often the best of them for books because their engine seems to only really value the last few things I looked at / bought and so just recommends really similar stuff. Of course, that fails hard whenever I look at something I'm not interested in for whatever reason.
On Steam I wish they had some elaborate filtering system, there are so many games, I need something like NOT(Adventure+Puzzle) BOOST(RPG), I need to combine tags, not just filter them out by themselves.
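Combinable excludes and boosts wouldn't even be hard to express. Here's a rough Python sketch of what that query semantics could look like; the game names, tags, and scoring scheme are all invented for illustration:

```python
# Rough sketch of combinable tag filters: NOT(...) drops games matching
# an entire tag combination, BOOST(...) ranks matches higher. All game
# names, tags, and the scoring scheme here are invented for illustration.

def score(game_tags, exclude_combos, boost_tags):
    """Return None to filter the game out, else a rank score."""
    for combo in exclude_combos:
        if combo <= game_tags:  # game carries every tag in the combo
            return None          # e.g. NOT(Adventure+Puzzle)
    return sum(1 for t in boost_tags if t in game_tags)  # e.g. BOOST(RPG)

games = {
    "Game A": {"Adventure", "Puzzle"},
    "Game B": {"RPG", "Adventure"},
    "Game C": {"Strategy"},
}
exclude = [{"Adventure", "Puzzle"}]
boost = {"RPG"}

ranked = []
for name, tags in games.items():
    s = score(tags, exclude, boost)
    if s is not None:
        ranked.append((name, s))
ranked.sort(key=lambda x: -x[1])
# ranked: [("Game B", 1), ("Game C", 0)]; "Game A" filtered out entirely
```

The key difference from Steam's current tag filters is that the exclusion applies to a *combination* of tags, not each tag independently.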
> for sites like Steam, YT, Netflix I always wish for more tuning parameters because their recommendations are all horrible.
The problem with those recommendation engines is that they're not optimized to serve your needs. They're optimized to serve the goals of their respective companies. And the problem comes when your needs are a little incompatible with the needs of the company.
Consider Netflix for example. Their recommendations don't seem to care much about what you actually enjoy the most. They pay different amounts of money to let you watch different content. So your goal of watching the thing you would enjoy the most is different from whatever the hell their goals are-- probably to get you to watch just enough Netflix to make you not want to cancel your subscription, but not run up their bills on bandwidth and licensing fees.
I'm absolutely confident Netflix could make amazing recommendations-- and probably already has them internally. But it's not in their best interest to give recommendations that are in your best interest.
To the extent that I'm right about this it could be a market opportunity to make an honest and useful recommendation service.
Netflix has a tiny catalog, so investing in making recommendations from that tiny catalog is mostly wasted. Half of Netflix's business now is House of Cards, so their entire algorithm consists of recommending House of Cards to people who haven't watched it yet, and random content to the others.
> They're optimized to serve the goals of their respective companies. And the problem comes when your needs are a little incompatible with the needs of the company.
Ultimately they are compatible though. And yes, I mean that in the econ 101 way. This tension should be at least partially resolved via pricing-scheme innovation.
Steam would earn more from me if I didn't close it in frustration every time I wanted to buy a game. I have literally not bought games because there was no way to filter out stuff I don't like (a well-defined category, in this case).
It's worth noting that satisfaction is also a function of perception. Did you enjoy the movie? Yeah, probably. But then you think about all the possibilities, a small number of which would naturally be better, and an acceptable film fades towards unacceptable.
Regardless of how good the recommendation engine might be, it's still always subject to human perception. That can be unpredictable and fickle, on a good day :)
I once clicked on a 'russian bride' google ad for a laugh, quite a while back. Google still puts lots of russian and filipino bride ads into my 'bubble' to this day...
Not the same person, but I have three. I have a roomba for everyday sweeping, a Bissell for actually cleaning the carpets not just sweeping them, and a Black and Decker handheld for the stairs, the car, and other places it's hard to take a full vacuum or a robot.
My Amazon homepage is full of stuff I already own. Even worse is their internet-wide remarketing makes it so I see stuff I own on many of the sites I visit.
There was however one moment where Amazon got it right: they recommended to me a book that only a week earlier I had purchased on a whim from an independent bookstore with cash. Creepy good.
> There was however one moment where Amazon got it right: they recommended to me a book that only a week earlier I had purchased on a whim from an independent bookstore with cash. Creepy good.
That is creepy. Any way it might've been more than just a coincidence?
It's possible, but it was a book in a relatively unusual niche that I have a hobbyist interest in. It was a book about the archaeological and linguistic study of proto-Indo-European people (PIE) and their origins and spread by means of horses and the, at the time novel, wheels. I had previously purchased a book or two on linguistics from Amazon so perhaps there is only a small set of popular books in that category for their recommendation algorithm to pick from.
I told this story in the recent Amazon recommendations discussion, but I bought a pair of speaker stands from them and before it was even delivered they sent me an email saying "HEY BASED ON YOUR BROWSING HISTORY MAYBE YOU'D LIKE THESE OTHER 10 SPEAKER STANDS!"
wait, wouldn't this cause buyer's remorse? like, if I just finally decided to buy some product ... being bombarded with all the other options suddenly makes me rethink my purchase decision, "oh but what if this other one is more gooder!?!?!"
I think the argument is cognitive dissonance and confirmation bias generally function to make you more happy with the one you bought already when presented with other alternatives.
I thought that cognitive dissonance was the buyer's remorse. I've experienced that a few times in my life. Now I've got no qualms about immediately taking stuff back.
A while ago I took an Ayurveda class (Indian medicine) and part of it was doing an oil enema. There are devices for this that are also used by a certain population for sexual pleasures. Bought one on Amazon and for months I couldn't show my Amazon page to other people because it was filled with sex toys....
My favorite sleeping mask is made by Joy Division, a blindfold sold as a sex toy. When I replace it every 1-2 years (or earlier in case I was on a flight…) my Amazon page is full of sex toys ;)
For books this works pretty well for me as I tend to read genres in bursts. If I just read a scifi books, I'll read a bunch more before going back to fantasy ;)
The equivalent for books would be "you might like this paperback copy or a large print version of the book you just purchased along with 3 other editions!"
It's probably correct by expected value. You're just not the multiple wallet type. I'm sure there are people who'd want five wallets, eight different shoe styles, multiple cuts of pants...
Your indifference is subsumed in the sheer size of others' consumption.
You're right that there are categories of products where this absolutely works. When I buy clothes, books, media... even tech, then I probably will buy something related. But there are some "one-off" purchases that Amazon doesn't need to hound me about.
I have multiple wallets. I have one I take on business trips that contains my work ID, American Express, business cards, etc. All stuff I normally don't carry with me. I have an everyday wallet with cash and my bank card and stuff I need everyday. And if I'm dressing up, I don't want to ruin the lines of my suit so I have a far slimmer wallet that just has my ID and space for one credit card.
I mean I'm not accessorizing with the latest seasonal fashions, but there are utility reasons to change the kind of wallet you're carrying for different purposes.
Netflix, I think, has killed their own recommendation algorithm when they removed stars and made it boolean. I don't know if those buttons even do anything anymore because I think they're just matching based off demographic and who they're trying to market to now. It's recommending shows that I never would watch in a million years and giving them high matches despite me disliking most similar shows just because I'm a 20-something male.
I think Netflix probably did it because people are inherently bad at being objective. For the average person, 2 stars for one movie and 4 for another isn't based on anything measurable; even they couldn't explain it. I'm shocked at some of the Amazon product reviews, most of which are 5-star reviews even if the product is absolutely terrible. Movies are different from products, but it's the same people doing the reviewing. Remember, the average user is not a thinking, analytical HN user. Average people are much better at boolean choices.
I know I'm in the minority here, but I am a big fan of the new system. I would torture myself trying to decide between, e.g., 3 or 4 stars for a movie. And then go back and re-rate other movies that I realized I liked more but rated lower than the just-rated movie.
Their % match numbers are fairly accurate, but I have had to go into the watch history and delete the occasional movie watched and finished that we actually hated. No number of 1-stars (or thumbs downs) would eradicate its effects on the recommendations.
5 - Absolutely loved it, will buy a disc
4 - Good, but won't buy a disc
3 - Movie was okay
2 - Not a good movie
1 - Stopped watching 20 minutes into it
My problem with binary choice is that 1 == 2 and 3 == 4 == 5, whilst 1 and 5 were very special for me. :(
Plus, the scale bias differing vastly between people and cultures makes the data a mess. Like, say, for me a 5 means 100% perfect. That's why discrete choice stuff is all the rage in the market research world (unless that's changed in the last few years).
Asking people "which of these 3 things you like best" vs. "rate these 3 things 1-5" will usually give you much more useful data, plus be easier for respondents.
Popular recommendation algorithms like collaborative filtering by matrix factorization account for user and item biases (the simplest method is to normalize a particular user's ratings by that user's average rating).
Couldn't you control for that by weighting people's ratings by the range in which they provide them? Like weighting a 5-star review a bit more from someone who averages 3's than someone whose ratings average 4's? Far from perfect sure but I bet it could save a lot of results from needing to be thrown out.
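That's essentially what the standard bias correction does. Here's a minimal sketch of mean-centering each user's ratings, plus a spread correction along the lines you suggest; the ratings are invented for illustration:

```python
# Sketch: center each user's ratings on their own mean and divide by
# their rating spread, so a "harsh" rater's 3 and a "generous" rater's 5
# become comparable. The ratings below are invented for illustration.

ratings = {
    "generous": {"A": 5, "B": 4, "C": 5},  # never rates below 4
    "harsh":    {"A": 3, "B": 1, "C": 2},  # never rates above 3
}

def normalize(user_ratings):
    vals = list(user_ratings.values())
    mean = sum(vals) / len(vals)
    spread = (max(vals) - min(vals)) or 1  # avoid dividing by zero
    return {item: (r - mean) / spread for item, r in user_ratings.items()}

norm = {user: normalize(r) for user, r in ratings.items()}
# The harsh rater's 3 for item A now scores *higher* (0.5) than the
# generous rater's 5 (~0.33): relative enthusiasm, not raw stars.
```

Far from perfect, as you say, but it keeps cross-user comparisons meaningful instead of throwing the data out.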
With stars you can cross compare with others to see if they have the same score. With simple thumbs up recommendations you cannot compare the ratings as the score is whether it appears to you or not.
I have to wonder if Netflix did this because a lot of their original or exclusive content seems to debut to mediocre star ratings. When the new system says "x% match" I assume that value is derived more from genre match or search relevance than whether I'll actually like it or not.
"In addition to the new rating system, Netflix has new match percentages (up to 100%) to more accurately predict how much users will like something.
...
The new rating system received 200% more ratings in A/B tests, according to Netflix VP of Product Todd Yellin.
When it comes to rating movies and shows, stars reflect the preferences that people want to have, rather than how people actually behave. Todd gave the example of users giving 5 stars to a documentary but just 3 stars to an Adam Sandler movie that they watched over and over again. “What you do versus what you say you like are different things,” said Todd."
If Netflix was trying to ensure that what I was mostly likely to watch next had the highest star rating, no wonder they gave up on it. Our opinion of the quality of something is not a good predictor of our likelihood of watching it.
Their users are surely not confused about that. So why does Netflix want to present a prediction as a rating? Is it to flatter their users by telling them that the thing that feels instantly gratifying right now is actually an amazing movie? "Hey, great choice. Billy Madison is a five-star movie. What? Why would you feel bad about not watching Raging Bull instead? It's a two-and-a-half star movie at best."
In other words, Netflix, like Facebook, like Doritos, is engineering itself for maximum addictiveness without regard for honesty. It will shelter you from even what you know and reassure you that whatever triggers a pleasure response in your brain is the best. Relax and enjoy it.
The truth is that we consume easy things a lot more often than we do challenging things. It would be exhausting otherwise. But the best things are often the most challenging. We know that, we know that easy movies are just a way to kill time, but Netflix wants to do us the service of helping us forget it, because then we might be 1% more at ease when we watch Netflix and 1% less likely to switch to another service.
>When it comes to rating movies and shows, stars reflect the preferences that people want to have, rather than how people actually behave. Todd gave the example of users giving 5 stars to a documentary but just 3 stars to an Adam Sandler movie that they watched over and over again. “What you do versus what you say you like are different things,” said Todd."
There's a reasonable objection to this behavioral definition of "like", which is that it doesn't actually make people's lives better, it just fills them with more compulsive behavior. It's not necessarily "irrational" to wish you were more patient, or to want to ask Netflix to show you useful things rather than useless fluff. That you occasionally betray your stated goals does not mean you should be denied the right of self-definition.
In other words, what people say they like is more important, to me at least. See eg "Thinking Fast and Slow" by Kahneman.
I can't agree more - I'm a huge proponent for star rating systems. I get that they are perhaps more complicated than a Boolean value, but they help me out personally.
I miss the days of "tap tap scroll four clicks" on the iPod to help me rate new music, specifically.
Thumbs up/down might not be the best for training a recommendation engine, but as someone who just switched his product's rating system from 1-5 to simple up/down, let me tell you: people have no idea how to use a star rating system. I would get people raving but leave a 1 star review, some people would leave a 5 star review and say bad things, some would leave a 1 star review but seem pretty neutral in their review.
All the technology hype aside, it often feels as if these feed-prediction algorithms are akin to weather forecasting. That is, they're not forecasting per se, but taking educated guesses based on some set of knowns. The problem is that correlation is not causation.
Part of that trend is because recommendation algorithms aren't all that interested in making recommendations anymore. They're interested in getting sales or views. To do this, they use dumber algorithms that are easier to understand ("people who bought this also bought...") and over-value recency. They're not helping you find new content, they're trying to prolong the time you're on youtube.
I have been paid to write an algorithm like this. It matched you to relevant content, but then a manually configured weighting table would skew results toward more popular sources. It went as far as taking into account the number of leads we had sent each partner that month as a percentage of their quota.
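To give a flavor of what that kind of skew looks like, here's a hypothetical reconstruction (not the actual code; names and numbers are made up):

```python
# Hypothetical reconstruction of that kind of skew: start from a pure
# relevance score, multiply by a hand-tuned per-source weight, and
# discount partners whose monthly lead quota is nearly filled.

def adjusted_score(relevance, source_weight, leads_sent, quota):
    quota_remaining = max(0.0, 1.0 - leads_sent / quota)
    return relevance * source_weight * quota_remaining

# (name, relevance, weight, leads sent this month, monthly quota)
candidates = [
    ("partner_a", 0.9, 1.0, 95, 100),  # most relevant, quota nearly full
    ("partner_b", 0.6, 1.5, 10, 100),  # less relevant, heavily weighted
]
ranked = sorted(candidates, reverse=True,
                key=lambda c: adjusted_score(c[1], c[2], c[3], c[4]))
# partner_b now outranks partner_a: 0.6*1.5*0.90 = 0.81 vs 0.9*1.0*0.05
```

The user sees a "recommendation", but the ordering is dominated by business knobs, not relevance.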
YouTube is definitely over-weighting the last few videos. Then again, it's likely optimized for a different market segment, especially with how many children are given tablets. Brief, intense fascination with many subjects is genuinely hard to optimize for, especially when a huge portion of your market has long-lasting intense fascination with a few subjects.
This is often the behavior I want. I frequently queue up a song on YouTube and then let it keep auto-playing, if I hit one I don't care for I usually select off the sidebar of recommendations.
This only works because YouTube stays focused on what you're doing without trying to build a bigger picture of who you are. It would be nice if it could somehow do both, but I don't have a clear picture of how that UI would function.
YouTube's home page system is quite hackable in a way, but you need to use the search or other recommended videos to change your viewing pattern.
In other words, it doesn't consider you as someone with a long history. You can change your profile from a conservative to a liberal in a few hours of watching videos. Whether it's possible to have a balanced amount of crazy (not the same as a centrist) is something I'm currently working on; it requires effort!
YouTube would rather you fall into a trance of continuing your recent viewing habits rather than providing a personalized library. It just makes better business sense.
Because they aren't trying to help you enjoy videos over the long-term, they're trying to keep you from going over to Facebook where their addictive algorithm will take over and reduce your likelihood of returning to YouTube!
It's a classic race-to-the-bottom. If YouTube lets Facebook keep you addicted, you won't get to the long-term.
Yes, but that has an unspoken assumption that recent habits are more addictive than longer-term ones. I find that suspect; after all, you've already shown that you'll go back again and again to these kinds of videos, whereas you might spend a couple of hours watching a bunch of videos of a kind that you'll never be interested in again.
Personally, I find YouTube terrible at keeping me engaged. Even when I want to keep watching videos, their sidebar fails to show me anything I want to click on at least 1/3 of the times.
> Yes, but that has an unspoken assumption that recent habits are more addictive than longer-term ones
This is too fine a point. A user will "select" a default choice by doing nothing, and it's in YT's interests to provide default choices that reward the company. This has been researched for decades and is supported by, among other things: https://en.wikipedia.org/wiki/Status_quo_bias
Because if e.g. you watch a Tool video, the ad rates for you watching a Korn video after that are higher than if you watch a Zeni Geva one. Alternatively, Kesha > Lady Gaga vs. Kesha > Joni James. Unscientific, but it comports with conventional wisdom with regard to fiduciary duties to stockholders.
> Because if e.g. you watch a Tool video, the ad rates for you watching a Korn video after that are higher than if you watch a Zeni Geva one
But why? If I've watched hundreds of Zeni Geva videos over the years, but it's my fifth Tool video, why would I be more likely to click on a ad on a Korn video?
It's not that they think you'll be more likely, it's that they rather you did, so the suggestion emerges. Recommendation engines are a funnel for revenue.
As largely a geeky population, perhaps we do need to accept that our interests are more varied and often actually more intense than those of the average person. Being mildly amused by every fart joke movie available for some people is more rewarding than watching an emotionally challenging drama, a thoughtful comedy, and eight documentaries about four topics.
It is not all that bad. If my last few videos were metal songs, I am likely to want some more metal in the next one. I am unlikely to want jazz, a comedy sketch, or a kiddie cartoon (despite all of those having been watched from my account in the last few days).
> Instead, the YouTube homepage seems myopically focused on the last 5-10 videos I watched.
Noticed the same thing happening for the past year or two.
I used to be able to go on YouTube and find a variety of interesting videos I'd never seen before. It seemed like there was a good balance of categories in the recommendations.
Now, I watch one boating video and suddenly my recommendations are 100% boat related with some random clickbait/viral garbage sprinkled in for good measure.
YouTube is weird. I made the mistake of watching a flat earth video last year in order to get a handle on that growing weirdo fad. Well it turns out that flat earthers are really avid content consumers (presumably because of the enormous cognitive reinforcement requirements to maintain such a belief) and YouTube really wanted to help me with that. It took a couple of months for it to stop offering me a portal to a better, flatter world every time I refreshed a page.
Once a video is removed from your history, you'll no longer get recommendations based on it.
If you aren't logged in, you can't view or edit your viewing history, but you can at least clear it. (Which will return you to YouTube's terrible default viewer profile... sigh)
I similarly had that happen with some GamerGate stuff. Watched a video just to see if I could wrap my head around it, went "Nope!" and went on with my life, and the next thing I knew YouTube was desperately trying to send me down the alt-right rabbit hole.
It's a hard problem. If they kept recommending old topics, you'd be like "hey, I quit watching pink zebra videos 5 years ago, stop recommending that stuff to me".
My supposition is that this is a response to changing tastes over time. However, I think 10 videos or fewer is probably an overreaction to this problem.
For example, let's say they have a user who watches 100 videos a week, for ease of math. 50 of those videos are in "core" areas of interest - these do not change over time. An additional 35 are in areas of secondary interest: topics which have piqued the viewer's curiosity, but not deeply interested them. We can expect these topics to change every [1,4] weeks. The remaining 15 are referrals or clickbait from other websites.
How can YouTube differentiate between these three classes of videos? The first class will be heavily represented in their subscriptions. Presumably, the viewer will prefer their recommendations to ignore the third class (clickbait). The second class is the hardest, as the user may want these videos surfaced, but then want them to decay over time as their interests change.
I think this is the problem that they are attempting to solve, with varying degrees of success.
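One plausible mechanism for separating those classes is an exponential recency decay on per-topic watch counts: a steady habit keeps a high score while a one-off binge fades. A toy sketch, where the half-life is an invented knob, not anything YouTube has disclosed:

```python
# Toy sketch of separating durable interests from transient ones with an
# exponential recency decay: a steady weekly habit keeps a high score,
# a one-off binge fades. The 14-day half-life is an invented knob.

HALF_LIFE_DAYS = 14.0

def topic_score(watch_days_ago):
    """Sum of decayed weights, one per watch; newer watches count more."""
    return sum(0.5 ** (d / HALF_LIFE_DAYS) for d in watch_days_ago)

core_habit = topic_score([1, 8, 15, 22, 29, 36])    # one video a week
old_binge  = topic_score([40, 40, 41, 41, 42, 42])  # six videos, weeks ago
# core_habit > old_binge even though both are six watches
```

Tuning the half-life is exactly the trade-off in question: too short and you get the "last 5-10 videos" myopia people are complaining about, too long and five-year-old interests never decay.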
It's not just over-tuning, but finding me things I might have missed. Not necessarily popular things, but things that appeal to me.
For example, I get emails from Pocket "You saved a popular article..." And I think "Who cares?" Don't tell me what I know, tell me what I missed. YT is similar. It recommends things I've watched. I'm looking for new and interesting and I'm getting yesterday's news? That doesn't excite me.
I hate this. Several years of nothing but a single genre of music in my history but watch one video about something different and get nothing but recommendations based on that. I have to clean up my watch history daily.
I've moved pretty much all my music listening to YouTube due to simple convenience of easily being able to generate whole genre playlists based on one song.
But because the playlists are dynamic, YouTube keeps on shoving songs from other playlists (and genres) into each other, trying to generate a "perfect playlist" and in the process making all the playlists sound very similar with no more genre distinctions except for the first couple of songs.
Yeah, but I get the impression history is retained and utilized. If I watch 2-3 how-to videos I start getting how-to videos that are similar to my history, not just random how-tos. Then if I switch to skateboarding, I get skateboarding videos similar to my history.
This seems like a good algorithm to me, as when I'm watching skateboard videos with my friends, I don't want "how to caulk tile joints" to show up.
Well, it sometimes makes sense. I, and many others, use YouTube as a music streaming service. Most people listen to most songs they like more than once. I dare say yt has gotten a good amount of ad revenue out of me listening to Darren Emerson's dub extravaganza mix of black sky by Shakespeare's sister, which is always in the recommendation panel for some reason...
I concur. These days I have to clear my YouTube history every few weeks to prevent it from spamming my page (and my fucking TV!) with suggestions related to random videos I watched over a short period of time. The videos I liked over the last 7-8 years seem to carry less weight. I wish there were a way to tune this behavior.
If I had to guess, I would say YouTube is attempting to get you to watch new types of videos you've never watched before. By focusing on recent videos you've watched, they can try to convert you from a 1-off to a regular viewer of a particular genre.
I think this is the right approach. You're not, after all, the person you were 5 years ago, or even a week ago. Most interests change with time; I lose interest in a topic very fast, and gain interest in another topic just as quickly. No good recommender system should be based off a reading of my "personality", whatever that may be. The most stable aspects of my personality, even if they can be divined from my viewing history, say little about what I would be interested in watching next.
I think that's a fair point, and I wouldn't expect a video I watched 10 years ago to factor in very heavily on what I'm seeing today. But at least in my experience, it doesn't appear that the engine takes anything that happened more than a few days ago into account.
As an example: I watched a few episodes of Penn and Teller: Fool Us yesterday. I hadn't really watched it before, and while I like Penn and Teller in general, I don't remember watching them all that much on YouTube prior to yesterday (I'm sure at some point I had watched a video or two, but not more so than anything else I stumbled on.)
Today, 12 of the top 30 videos on my YouTube home page are specifically Penn and Teller: Fool Us. Not magic in general, not Penn and Teller in general, but specifically that show. That seems like the very recent past is way overrepresented.
This is because they're not trying to give you stuff you'd generally like given your entire history. The recommendation feature powers the part of YouTube that auto-plays the upcoming video.
So e.g. if I go and view Russian dashcam videos they're going to automatically play more of them, even though I've shown no prior interest in that topic.
Having two systems for recommendations would introduce a lot of UI complexity, so I can see why they didn't go for that, and why the recommendations are consequently tuned for people who are actively watching videos on some topic right now.
> Having two systems for recommendations would introduce a lot of UI complexity
Do you mean, in terms of implementation or for the user? Because as the later, I think the two are already conceptually different (homepage vs next video), so I don't see how just feeding it different videos would make it more complex.
For the user. What you're describing is still going to cause complex UX.
So let's say I'm interested in exactly two things. Russian dash cam videos and videos of trains without narration.
I go to the homepage and click on one type of video, what should the next video be? A random video from the two categories? One or the other? Should it show me a banner at the top indicating what recommendation mode I'm in (historical preference or "similar to current video").
Now I go and do the same on my Android YouTube app which has no real UI equivalent of a homepage, what happens then?
This is a lot of UX complexity for a feature few probably care about.
Like I said, I think the homepage is already quite different from the next video in the user's perspective, so there's no need to show banners or anything like that. Homepage = full profile analysis, next video = similar to current.
By the way, the Android app does absolutely have an UI equivalent of a homepage, it's even called "Home".
I mean once the user selects a video you need to continue showing them context, because at this point they may not remember what mode they're in to begin with.
Making your app behave differently because you navigated to the current state via different menus is very bad UX design. That's all I'm saying. Your suggestion would entail either UX complexity or such implicitly different behavior for YouTube.
Yes Android has a "Home" button. But what I meant by no real UX equivalent is that when you open youtube on the web you'll open a new tab and go to youtube.com.
When you do so an Android you've likely just dismissed the app in the past, and opening it again will bring you back to the last video you were viewing. You don't go through the homepage by default.
Thus it's more of just another menu item on Android, not something that's equivalent to / on a website.
I imagine it would be possible (though difficult) to autoplay videos which are related to each other and gradually converge to something that would interest the viewer.
For example, Russian dash cams to Russia at night to the Russian sleep experiment creepypasta, to horror games to video games in general, if that is what the user tends to watch.
I know this has graph theory written all over it and the shortest-distance problem has wreaked havoc for centuries, but I think with enough resources Google/YouTube could find a good compromise in this situation.
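At its simplest, that "gradual convergence" is a shortest-path search over a topic-similarity graph. A toy version with an invented graph, using the example topics above:

```python
# Toy version of the "gradual convergence" idea: topics are nodes in a
# similarity graph, and autoplay walks a shortest path from the current
# video's topic to one the user historically likes. Graph is invented.
from collections import deque

graph = {
    "russian dashcams": ["russia at night"],
    "russia at night": ["russian dashcams", "sleep experiment creepypasta"],
    "sleep experiment creepypasta": ["russia at night", "horror games"],
    "horror games": ["sleep experiment creepypasta", "video games"],
    "video games": ["horror games"],
}

def converge(start, goal):
    """Breadth-first search for the shortest chain of related topics."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = converge("russian dashcams", "video games")
# route walks dashcams -> night -> creepypasta -> horror -> video games
```

The hard part at YouTube's scale isn't the traversal, it's building an honest similarity graph over hundreds of millions of videos; still, the structure of the problem is this simple.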
tl;dr, as I understand it: when family members Like your Facebook content in relatively quick succession, FB apparently interprets it as a signal that it is family-specific content. I didn't see any metrics, but this seems plausible.
I think I'm more of a fan of FB than the average web geek, probably because I used it during its phase of peak innocence (college years) and have since weaned myself off to the point of checking it on a less-than-weekly basis. I also almost never post professional work there, nor "friend" current colleagues. Moreover, I've actively avoided declaring familial relationships (though I have listed a few fake relationships just to screw with the algorithm). But wasn't the feature of making yourself a "brand page" and/or having "subscribers" (which don't count toward the 5,000 friend limit) supposed to mitigate this a bit? I guess I'm so used to keeping Facebook solely for personal content (and using Twitter for public-facing content) that I'm out of touch with the sharing mechanics. That, and anecdotal experience of how baby/wedding pics seem to be the most Liked/Shared content in my friend network.
> But wasn't the feature of making yourself a "brand page" and/or having "subscribers" (which don't count toward the 5,000 friend limit) supposed to mitigate this a bit?
If you have a page, Facebook wants to milk you, so they've got this weird algorithm that pushes your posts to only a small fraction of the page's followers, expecting you to start "promoting" them.
Basically they screwed "organic reach".
Oh, and about 3 days ago I created a second Facebook account with the purpose of connecting with software developers and my English-speaking friends (I'm Romanian). I did this thinking that I don't want to share semi-private pictures of family with strangers, or to spam my family and friends with programming stuff.
But only 24 hours later they've disabled my account because of "security concerns", without notice and now I'm waiting on their support to reply after I've sent them my picture for validation.
And another thing - the online parents group from my son's school is on WhatsApp. They tried a Facebook group, but the problem is that when important announcements happen, not all parents receive notifications, so they resorted to something that works.
> But only 24 hours later they've disabled my account because of "security concerns", without notice and now I'm waiting on their support to reply after I've sent them my picture for validation.
I'm not Facebook's biggest fan, but I think this is a valid security concern (I'm assuming you used the same name as your initial account). Cloning FB accounts for impersonation is a valid threat vector for getting inside someone's network: when accepting friend requests, most people don't double-check whether they are already friends with the purported requester.
> If you have a page, Facebook wants to milk you, so they've got this weird algorithm that pushes your posts to only a small fraction of the page's followers, expecting you to start "promoting" them.
While this does enable Facebook to force brands to buy advertising, it is also a user-friendly change.
Nobody I have spoken to about this thinks that "like/follow" = "I want to see everything they post" on Facebook. People like restaurants where they had a nice dinner and want to show their support; it doesn't mean they want to see the daily special every day. Even if they "follow" you, it doesn't mean your post should show up before a friend's holiday pictures today, and tomorrow there is already new content.
The reality is that most facebook users do not care about the posts from pages they like/follow, even if they pressed the button some time ago. An automated filter that keeps these posts from showing up is good for the user experience.
It's the same as "facebook friends" - they aren't real friends, just people you met once at a party in college. It might be interesting to see a post from them once in 10 years when they get married or move to a new country, but not their daily life. The same applies for brand pages - a like should give you once in 10 years access to their feed, but not more.
I wasn't aware that facebook does partial publishing/pushing from Pages.
I'm co-admin for an org's official Page (and Group). Our bylaws require advance announcement of various things, like meetings and resolutions. We've been using our official Page just like our mail listserv, assuming all our subscribers are getting all of our announcements.
Frankly, this sucks. Minimally, it violates the UX design principle of least astonishment.
> We've been using our official Page just like our mail listserv, assuming all our subscribers are getting all of our announcements.
They will get it if they access your page directly, or subscribe to notifications from your page. It's the same with a personal profile, I believe — not everyone will receive your posts on their news feed unless you've interacted with them a fair bit.
I would be pretty annoyed if I were part of an organization that required me to keep up with Facebook posts and used it as the only way to know what's going on. You should definitely have a mailing list people can sign up for. Facebook is not really an appropriate medium for that kind of thing.
I know better than The Algorithm whether I want to see every post or not. But I'm not allowed to decide that, because The Algorithm isn't there to serve me; it's there to serve Facebook.
I followed specifically to be informed of events. It was quite infuriating when they changed the algorithm and I missed them. If I get notices from pages/groups I don't care about anymore, I either unfollow or mute the next time their post annoys me.
Quite a few friends were following pages for the same reason.
> If I get notices from pages/groups I don't care about anymore, I either unfollow or mute the next time their post annoys me.
I do that as well, but I think it's extremely rare among Facebook users. Everyone else I know either:
* Puts up grudgingly with "spam" they no longer want to see, or eventually
* Complains "Facebook is full of crap these days, I'm deleting my account".
A filter algorithm is a much better experience than manual unsubscribe for most users. With how easy it is to "like" anything you see on Facebook and the rest of the web, it is almost necessary to assume that subscriptions should "expire" unless the user keeps actively interacting with the page.
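One simple way to make subscriptions "expire" like this is a time decay on the follow's weight. This is a sketch of the idea only; the half-life parameter and function are invented, not anything Facebook documents:

```python
def follow_weight(days_since_last_interaction, half_life_days=90.0):
    """Halve a follow's influence on the feed every `half_life_days`
    without any interaction, so a stale page-like from years ago
    stops driving what shows up in the news feed."""
    return 0.5 ** (days_since_last_interaction / half_life_days)
```

A like from today carries full weight (`1.0`), one untouched for a year is down near noise level, which matches the intuition that an old restaurant like shouldn't outrank a friend's holiday pictures.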
What I want to read is stuff my friends have taken the trouble to type in, all of it, whatever the topic. I don't want to read whatever random crap on the web they happen to be reading. Original user-generated content made Facebook, and the inability to find my friends' content is what makes it not worth reading. I have about 1B ways to get hot memes and trending internet topics. I have relatively few ways to see how all the people in the various corners of my life are doing.
A few years ago I had to manually add a few specific people to my "close friends" list because Facebook decided to stop showing me their posts completely. One of the people was the best man at my wedding, another was my wife. You know, just the two people I care about most in the world. Just completely stopped showing me their posts, but I would see a ton of stuff from my wife's cousin that I'd never liked or commented on. So now I get a notification every time they post something so I can have a chance to see it.
I like Facebook's utility but man their algorithm sucks.
Pretty much this. I constantly see what a couple of people liked on Facebook, but I do not see things people I care about wrote, nor do I get information about events from groups I follow.
Well "like" and "follow" should be different things. I follow users on instagram/strava because I want to see what they do. I like the content they post when... well, when it pleases me, or I want it to get some more attention. I deleted my facebook account after nearly 10 years with the service, but I hated the decision they made to show content from pages you've liked (often inadvertently when you meant to like a post rather than the page). I fought against facebook for the last few months as I kept "unliking" every page/profile I had ever liked, but it was hard to keep that clean.
They have changed this recently with respect to pages, so you can follow a page without liking it and vice-versa. However, the default still seems to be that like = follow. It takes additional action to decouple them.
> I did this thinking that I don't want to share semi-private pictures of family with strangers, or to spam my family and friends with programming stuff.
Do you know there are friend groups you can use to eliminate that problem? "Family and school," "Programmery Peeps," if you post using either of those groups for visibility then it seems like the issue is moot.
Most people don't use them. It is too hard (10 extra seconds is too hard). Facebook is good at figuring out from the pattern of likes who else will be interested in seeing something. It is only when you have an important announcement that they screw up.
Ah, that explains a lot. I know very little about social media marketing and wondered why my attempts to launch a few pages related to art projects were drawing such little attention from my social circle. Being me I assumed it was because my art is shit and largely abandoned a project I'd spent months developing. Zucked again.
...and it causes the political echo chamber problem, which results in extreme polarization of whole societies based on a fake perception of what the other side is doing vs what they're actually doing. Terrible indeed, but it generates views and thus revenue.
I find this thesis questionable. I'm a very political person and do a good bit of politicking on FB, including my group membership selections. I don't find this echo chamber effect at all, not least because there is a lot of inter-group antagonism (people who stage raids on other groups and so forth). I certainly don't get all my political ideas/news/anything from FB, but it seems more diverse than it is usually characterized as.
Does it make things more polarized? I would say yes. But that is a general thing with the internet and has been true since the creation of Usenet; the Internet accelerates everything, including political trends. I find the best way to guard against technology's impact on cognition is to maintain time for reading things on paper, and also time free of media consumption.
Only if you mostly have friends of one political kind, I think. I know that I get plenty of "fake news" links shared from all sides in my Facebook feed.
Usually the article contents themselves are fine and quite reasonable, but the Facebook shares have some completely false click-bait headline attached.
> If you have a page, Facebook wants to milk you, so they've got this weird algorithm that pushes your posts to only a small fraction of the page's followers, expecting you to start "promoting" them.
> Basically they screwed "organic reach".
The average user follows way too many pages and has too many friends, all posting, to see all the content every day.
The algorithm isn't there to milk you; it's there to stop the news feed from becoming the MySpace bulletin feed (which was an effective way to post in 2007, so long as you posted 5x every hour).
I'd love to believe you. And you are likely right, the algorithm itself isn't there to milk anyone - but facebook surely acts like they want to, and they'll use that algorithm to do so if needed.
I have a page for artwork. Most of the time, Facebook will tell me how much better the current post is doing than so many other posts on the page (even if it doesn't seem likely). It reminds me to post after 2-3 days, because my few fans will miss me. I do art and can't actually get things done that quickly, after all. Every post has multiple prompts to boost the post, noting how others with similar content have boosted posts. Sometimes it tells me how many hundreds of local people will see it. Sometimes it'll be a general prompt. These even show up as notifications, sometimes on both my personal and my artist page. Additionally, I get ads for post or page boosts in my normal feed.
I'd like Facebook to have an option to see all posts, without filtering, just as they're posted. It's not hard, it's a simple UX, but it's just not there.
I kind of understand the point of not showing everything: too many people producing too much content (and it's hard to decide whom to unfollow when everyone produces more or less the same ratio of quality to non-quality content), which led me a few years ago to abandon Twitter after just a few days. I followed a handful of people and spent whole mornings just catching up with yesterday's tweets. This seems to be less of a problem these days, but that might be subjective.
What makes me go nuts though is stuff disappearing from my FB wall (using mbasic web). Lately I see that a lot: see a few interesting entries on timeline, I click one, come back to the wall and they've disappeared and some random stuff from 3 days ago took their place and I can't find them again.
Same happens in YouTube Android app. See a few interesting videos, click one (I can't "open in new tab" in the app...), go back, they're gone and replaced with something totally unrelated. Ugh.
They used to do that. They had two different news feeds, one with their algorithm and one that was just a real-time feed. Then they scrapped the real-time feed in favor of their algorithm-based feed. It's not that the real-time was hard to do, it's that the algorithm is more easily monetized.
Moving the feed from 'chronological' to 'algorithmic' obfuscates the true quantity of actual content on the user's timeline, thereby allowing a higher density of ads and other sponsored content without the user necessarily being able to tell or prove.
I hypothesized about other benefits and drawbacks here [1].
Yup. It was a huge deal in, what, 2008, when they did this? Almost as big as when they got rid of curated content and turned everyone's link-able "interests" into weird generic "likes" (it was a really strange period for at least a few months as I recall).
For a while you could switch back every time you logged in, manually, to a chronological news feed.
Instagram was one of the last holdouts but they switched a year or two ago.
Twitter is amazingly bad at this. It takes screenfuls of junk on mobile to get to the chronological stuff.
There used to be a Feed API, but it was removed for privacy reasons... people (understandably) didn't like that their friends could allow another company to read their posts.
There's a lot of API stuff they used to have but don't anymore. Like how I can read events from their API but can't create an event using their API. Not sure why Facebook doesn't want me adding content to their site but have no problem with me pulling content from their site.
It's probably a core part of their scalability that they never have to produce a complete list of posts. Every fetch is on best effort basis within a small time constraint.
They don't even produce complete search results. Go to a group with a long history, and use "search this group" for a keyword that will return over, say, 100 results--some of them a couple of years old.
You won't see many of the older posts, and facebook gives no indication that they have been pruned. It's very hard to find old things without just scrolling through chronologically, which is slow and error prone.
No, that's still from your own bubble that you see on "Top Stories", just ranked "newest first". Not the full list you've subscribed to (as in, every single page and every single friend).
I believe if you create a "group" and then add your entire friends list to it (this involves lots of clicking btw, take a few letters of the alphabet per day) you can accomplish this if you really want.
I feel like these are just shitty models. A good recommendation model would get features like "is_mom" and learn that "is_mom" is a shitty predictor of relevance.
Similarly with Amazon: products should have some sort of 'elasticity' score, and the model should learn that recommending inelastic products is a waste of screen real estate. I mean, I doubt the model is giving a high % to most of those recommends; it's likely more a business/UX issue in that they've decided it's worth showing you low-probability recommends instead of a cleaner page (or something more useful).
YouTube, on the other hand, seems precision-tuned to get you to watch easy-to-digest crap. You consume the crap voraciously but are generally left unfulfilled. This is a more difficult problem, where you're rewarding naive views rather than a harder-to-discern 'intrinsic value' metric. As a long-term business play, the model should probably weight more intellectually challenging content, just as fast food restaurants should probably figure out how to sell healthier food, because by peddling crap you're only meeting consumers' immediate needs, not their long-term ones.
Yesterday I sent one of my friends a link to an old entry (4.5 years old, from December 2013) he wrote as a Facebook note. There were 70+ likes, 30+ commenters and 110 comments on it.
He added a new comment yesterday - I only saw it, because I randomly decided to read through the comments.
Those who commented on it should have received a notification - well, in the end, 2 people got something.
This is how you effectively kill conversation, which baffles me, because keeping conversations running = engagement, which is supposed to be one of the end goals.
I get the "need" for a filter bubble, even though I'd simply let people choke on the amount of crap they'd get if following actually meant getting everything; they might learn not to like things without thinking.
But not sending notifications at all? Why? Why is that good?
My guess is the point of those notifications is that it gives them an excuse to add points to the addictive little red number. It's not because they think you'll actually care.
Facebook also has a serious problem in that its news feed is a content recommendation engine with only positive reinforcement but no negative reinforcement. So you end up with a ton of false positives even when actively interacting with the content, and their system doesn't even know how wrong it is.
And should you really not like some content, the solution is unfriending the poster, rather than simply matching against that type of content (political, religious, etc).
The fact there isn't a private dislike button (that no one sees you clicked other than Facebook), is remarkable at this point. It's either woefully obtuse, or intentional so that a feed of false positives better hides moderately targeted ads.
"The fact there isn't a private dislike button (that no one sees you clicked other than Facebook), is remarkable at this point. It's either woefully obtuse, or intentional so that a feed of false positives better hides moderately targeted ads."
I vote the last option. I think there was an explanation, basically saying it would make people upset or sad and they wanted to avoid that. The solution wound up being a variety of emotions to pick from (which counts more than a simple like). They included negatives like "angry" or "sad".... I'd still just rather have a simple dislike button.
Not really? It's a very different system, there's no plausible way to have a "private" downvote when the "recommendation system" here is entirely crowdsourced rather than personalized. I suppose the hide button is sort of an equivalent, but I don't think the systems are really comparable.
Pruning my friends improved my experience, but they grow back over time as people rediscover you or you make new acquaintances. I wonder if you could make a game or website that forced you to prune your facebook friends down, perhaps by looking at your social graph and telling you that you haven't messaged with a person in x years or something, or that you've never even "liked" something they posted. Some heuristics to determine who you really wouldn't miss. The problem with unfriending is that Facebook's UI makes it almost impossible to do in batches, and it feels kind of rude to "unfriend" someone, even though I personally wouldn't care if someone I don't talk to unfriended me.
A game rewarding you for removing Facebook friends? Burger King did that years ago, got quite a bit of exposure thanks to it: sacrifice 10 friends, get a Whopper:
Here's a trick to constantly pruning down your friends list. Don't worry about pruning down your whole list, just check the people who have birthdays every day you log in. I just checked; there was a single friend who had a birthday today, and I haven't had any contact with her in years. That makes for a nice easy prune.
Of course the downsides are that you risk disappointing someone who obsessively checks their friend count on their birthday, and that it only really works if you log in every day.
If you don't care enough to talk to them in a year, then I doubt you care about "disappointing someone who obsessively checks their friend count on their birthday".
It isn't really outrageous when you consider their core business model and the value of keeping you in a widespread graph of people. Someone with some time could easily do it with a plugin manipulating the JS on their friends page though.
I did this. One day I just decided to make my Facebook account family only - removed hundreds of "friends". It worked great for me. I don't idly check it so often (because there are fewer updates), but I still get to see what people I care about are up to.
There is a very simple solution for this issue. Create a Facebook Page for yourself as a brand, post links to your articles on that page, then share it from your personal Facebook page.
I have a private Facebook account with the people I personally know, and a Facebook page to publish things for my blog. And, oh boy, where do I start with my complaints!
1. So, the problem of the echo chamber is still there. If I look at the last 30 things I've posted, it's always the same group of people that interact with them. One of them starts, the next ones follow, and usually, once those 10 people see it, the interactions end. In the end, it reaches some 400 people according to Facebook, out of which ~40 people click on it, and 10 regular people interact with it. Sometimes my posts get shared by multiple other Facebook pages, ranging from hundreds to six thousand likes. It makes a difference for the Facebook "reach", but not for the clicks or the interactions (maybe there's one lone wolf who decided to follow it up and like it).
2. I'm still stuck in that "tax" he mentions in the introduction. Everything on Facebook needs to be hand-curated, thoughtfully composed, and published within the Facebook platform. For my last two articles, I wanted to reach as many people who like my Facebook page as possible. This was not achievable by any sort of sharing from my website, so I had to either pay money or post it as a Facebook "Note". I decided on the latter, since the project the articles were about was already hurting me financially.
3. Instability. Whenever I open my Facebook page, my browser is either really close to crashing, crashes, or simply doesn't load the page properly. It doesn't happen anywhere else on Facebook, exclusively on my own page. Just now, I wanted to scroll down to see the reach of my latest two page updates, all posted within the last six hours. Took me three minutes and multiple refreshes. Same thing happens with every other browser I've tried to use.
4. Oh my god, the spam! Every single Facebook update I post, there's something in my notifications. Either it's a prompt to boost the post, or the post is more popular than X% of my previous posts, or my page had X views, Y likes, and Z shares, or some random guy is somehow ranked high enough among the likes that his like deserves its own notification. Whatever I post, there's some bullshit notification that I don't want to see, and there is no option for me to fine-tune it.
> 4. Oh my god, the spam! Every single Facebook update I post, there's something in my notifications. Either it's a prompt to boost the post, or the post is more popular than X% of my previous posts, or my page had X views, Y likes, and Z shares, or some random guy is somehow ranked high enough among the likes that his like deserves its own notification. Whatever I post, there's some bullshit notification that I don't want to see, and there is no option for me to fine-tune it.
I have buckets of hate for this, and they seem to be obvious bullshit. Though you forgot the infamous "you haven't posted in 3 days! People want to hear from you!", which truly isn't true, nor even possible. My page is an art page, and I don't have new content twice a week.
This first one "more popular than X% of my previous posts" is the most obvious bullshit to me, because the number keeps changing! Sometimes it's 70%, sometimes 80%, sometimes 95%.
Oh, and I forgot to even mention the "boost it for free!" that I get from time to time, even though it's not really free; it just says so on the notification. A friend of mine fell for that and was charged. I clicked on it and, luckily, I hadn't entered my credit card data in the past, so I was instantly suspicious when they asked for it.
Yeah, it most definitely is. I share the posts in art groups - sometimes from the page, sometimes the photo, depending on the group rules. I have a general idea which pieces are more popular, and it definitely doesn't match up to their statistics.
And oh my... though, I'm now curious if he happened to notice any actual results from it or if the negative reports still hold true.
It would be interesting if Facebook allowed personal users to access the brand tools that pages get (impressions, distribution, etc).
This won't happen because facebook wants you to think everyone is seeing your posts. People would be livid if they knew just how low of a % of friends were actually viewing their stuff due to algorithmic meddling.
Do you understand enough about how the algorithm is working to be sure this solution is effective? Even if your Mom still immediately 'likes' it? What if she 'likes' your brand too, then sees all brand posts on her timeline, and still immediately likes them?
I suspect that the algorithm for familial relationship boosts is no longer a factor once the content is posted under a separate Brand.
For all we know, the "mom" factor isn't weighted for things like external links. The OP is making that suggestion, but perhaps Facebook thought of that and discounts the relationship if it's (a) an external link or (b) the family member likes more than X% of posts.
Some proof or data to back up the article's claim would be great. I'm not really buying it.
If moms auto-like every post, then how is that a relevant signal? Everyone has a mom. That would mean every post is getting penalized in the same way (which effectively means no posts are getting penalized).
And if circumventing this was as simple as excluding his mom, wouldn't the effect be even greater if he excluded all non-technical friends and family?
Which pretty much just means you're posting this for the greater public, which presumably a lot of users of Facebook's API already do. Since his intention is for his content to be seen by the greater public, then... go ahead and tell the API that?
It's a great angle for an article, and it's very shareable, but he provides no data (even though he seems like someone who would have all of the data).
Have you ever had a post that ended up being liked by an odd subset of your Facebook friends? I certainly have had posts, that for some reason, only my high school friends end up liking.
I know this is only another anecdote and real data would be preferable. But I'm convinced that the described behavior absolutely happens.
I'm pretty sure this description is wrong. My impression is facebook shows your content to a subset of friends and then classifies it based on likes received. If your mom 'like's 9/10 posts and your other friends like 3/10 posts, then 60% of your posts /are/ family content. Even if they're about mathematical theories.
This is the case if Facebook is only classifying it, but if it's using those likes to determine which subset of friends see your content then it becomes self-reinforcing. E.g. do your friends see all 10 posts, or do they only see 5 because the first few people to interact with the other 5 were family members?
It seems like what he is claiming is that his mom instantly clicks like on everything and so the post gets gated before his other friends ever have the opportunity to indicate whether they like it.
A lot of this seems like it comes down to how that initial pool of users is treated. If they show it to 5 users and his mom is the only one (ever) that likes it, then it makes sense to show that content to people common to him and his mom. If the other users like it, they should distribute it more widely and maybe include users from other non-overlapping networks. That all seems good. If mom likes 90% of the posts, then show that 90% to the family and maybe they care and maybe they don't. If it gets engagement from other people, share it more widely. That seems like a good approach.
The wrong way to do it would be to show it to 5 people and as soon as his mom likes the content, decide the content won't be interesting to the other 4 people they showed it to. I haven't seen any evidence of that happening, and I don't think the article points to any (anecdotal or otherwise).
Well, like I said to the other guy, I don't know how you account for the fact that he gets engagement from other people when he hides the posts from his mom.
The missing detail here of course is...maybe his posts aren't getting engagement outside of mom because his Facebook 'friends' aren't interested in them?
I realize there isn't an extreme level of rigor here, but it gets tiresome reading HN comments where it's just a series of increasingly extravagant hoops for the original article to jump through before the commenter is willing to accept that the findings might be valid and give any consideration to the article's implications.
But if your mom likes all of your posts, then her likes become a less predictive signal. At that point, she is almost like a bot in terms of signal value.
This article is about how while Facebook's intention is to increase engagement, it backfires in this specific case because the algorithm takes his mother's Like and interprets that as "this post will get the most engagement from family" when actually it was more targeted to outsiders.
I understood it very well. The issue is that his case is an edge case and facebook likely doesn't care--the algorithm creates more engagement than it destroys.
I generally hold facebook in contempt for the forced filtering that they subject me to. Making the 'sort posts chronologically' flag come unstuck is a dirty hack that they should be ashamed of.
I'm not a machine learning expert, but isn't this an easily solved problem?
Similar to TF-IDF, where you mitigate common words by dividing by their overall frequency, you should be able to divide the weight of any particular "like" by the frequency of likes between the two people. That way a genuine expression of interest from an acquaintance is weighted far higher than one from a relative or close friend who reflexively likes everything you post.
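A minimal sketch of that idea, borrowing the IDF intuition. The function and its inputs are made up for illustration; any real system would need smoothing and far more signals:

```python
import math

def like_weight(likes_given, posts_seen):
    """Discount a like by how indiscriminately this liker likes this poster.

    likes_given / posts_seen near 1.0 (mom likes everything) -> weight ~0,
    analogous to a stopword in TF-IDF; a rare like from an acquaintance
    gets a much higher weight, like a distinctive term."""
    rate = max(likes_given, 1) / max(posts_seen, 1)
    if rate >= 1.0:
        return 0.0
    return math.log(1.0 / rate)  # IDF-style damping

mom = like_weight(likes_given=10, posts_seen=10)          # reflexive liker
acquaintance = like_weight(likes_given=1, posts_seen=10)  # rare, genuine like
```

Here mom's like contributes weight 0.0 while the acquaintance's contributes about 2.3, which is exactly the "Like Inflation" discount another commenter describes below.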
> I’d love to hear from others who’ve seen a similar effect and love their mothers (or other close loved ones) enough to not cut them out of their Facebook lives.
I think the bigger issue is family members, friends and relatives who do cut out their non-FB-using loved ones by ignoring all other methods of communication. "Oh, you didn't know we planned a wedding? Too bad you're not on FB!"
Even aside from complicated questions like the News Feed algorithm, when a friend started hitting Like on nearly every post of mine, I appreciated their caring but mentally discounted the meaning of their Like as a reaction to the content of my post. It's like "Like Inflation". So the algorithm should probably discount indiscriminate likes the same way...
>For quite a while now, I’ve been publishing most of my content to my personal website first and syndicating copies of it to social media silos like Twitter, Instagram, Google+, and Facebook. Within the Indieweb community this process is known as POSSE an acronym for Post on your Own Site, Syndicate Elsewhere.
YMMV on whether Facebook sucks for you; your experience is probably the exception. Personally, I find it very valuable despite its many flaws. It has made making new friends a lot easier (and I absolutely needed local friends after I relocated). I've used it to reconnect with a couple of childhood friends, which has been absolutely amazing, and I'm super grateful for that.
Also, my mother is very toxic and abusive, so I'd really rather not call her; I can acknowledge that my experience calling my mother is probably the exception. Additional value comes with being "Facebook friends" with my mom: it reduces her need to pick up the phone and harass me.
> Facebook, despite the fact that they know she’s my mom, doesn’t take this fact into account in their algorithm.
Wouldn't it also be possible to analyze the content of the post to determine if it's family-related? It seems like with a math or technical post, that should be easy for FB to do.
Why would that imply that the rest of the family would want to see it? The suggestion here isn't hiding it from mom; it's avoiding interpreting a click from mom as "this is something primarily family cares about."
Offtopic: It always baffles me when people "suggest" better ways to customize their "internet feed" without realizing how much information the "system" needs to know about them (and the people close to them) to make it useful. When confronted/informed about that, they explicitly deny such permission because it undermines their privacy.
Unrelated but worse problem: top of feed livelock. If you're below the fold, cause, I dunno, you got shit to do, you are only going to get viewed by heavy scrollers, which overly favors (IMHO, as somebody with shit to do) folks who get on the thread first. Even a one-dimensional "rank by number of uplikes" filter still doesn't calculate your likes/views ratio, which is what you'd actually care about.
This is probably not a Facebook problem, as Facebook serves you content based on... god only knows what. But HN? Reddit? Twitter? Rampant. Real bad for engagement, as anybody looking at a post with more than a few comments knows not to even bother. Gots to get that engagement, you know?
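The likes/views ratio point above is easy to sketch. This is my own toy ranking, not any site's actual algorithm; the priors are a crude smoothing so low-view posts aren't ranked on noise:

```python
def rank_by_like_rate(posts, prior_likes=1.0, prior_views=20.0):
    """Rank posts by a smoothed likes/views ratio instead of raw like
    counts, so late posts seen only by heavy scrollers aren't buried.
    Each post is a dict with 'likes' and 'views' counts."""
    def score(post):
        return (post["likes"] + prior_likes) / (post["views"] + prior_views)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "early", "likes": 40, "views": 1000},  # first to the thread
    {"id": "late",  "likes": 15, "views": 120},   # below the fold
]
ranked = rank_by_like_rate(posts)
# The late post wins: 16/140 ≈ 0.114 beats the early post's 41/1020 ≈ 0.040,
# even though the early post has more raw likes.
```

A real system would want a proper confidence interval (e.g. a Wilson lower bound) rather than additive priors, but the point stands: rate, not raw count, is what rewards late arrivals fairly.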
I feel it's not limited to family. Empirically, I've noticed that liking a post reflects how much someone likes the poster more than the contents of the post itself.
I haven't found Facebook to be very good at recommending things. They often don't seem to be able to tell what people are interested in, and they don't really let users control who they're posting to. For example, they should make it easier to just post to people who live in the same city...
I'm pretty sure they're using a machine learning algorithm, and it's determining how to handle your post. Can someone who understands ML algorithms better than I do explain how this would interact with the feature weights? I'd be curious how we think that would play out.
I also have a problem with social media in general, especially with following people instead of institutions/groups. Usually what 99.9% of people like is totally not what I like. So if you base the content I should consume on the assumption that I like what my connections liked, you're going nearly the opposite direction of what I want.
PS: Maybe some of you have the experience of having an active following. I notice that many social networks like Twitter, FB, and YouTube allow comments. But almost never does the content creator/sharer actually react to comments. Some may use comments in future content, but some don't react at all. Are these people not even reading the comments? Why are people commenting when it's so obvious that it's just going down a black hole? For instance, on Twitter a share with additional text is nearly the same amount of work as a comment, and it's obvious that you reach more people by share-commenting rather than simply writing your comment underneath the content. So why do people do that? And why does Twitter even have the option to comment?
Had an aha moment reading this post. It makes sense now. This sentence mirrors my own recent experience: "These kinds of things are ones which I would relay to them via phone or in person and not post about publicly."
I find this interesting: So much talk about the technical solutions both in the post and in the comments here. Yet, it doesn't seem like he asked his mom not to like his tech posts. If that's the goal - why not start there?
It's a more general problem than that. Maybe Facebook should penalise posts written by popular authors, celebrities or recognizable brands to offset the popularity (rich-get-richer) factor and select only for quality.
Interesting post. I clicked thinking someone had coined a clever new "NASCAR Dad" moniker about parents who read and parrot their Facebook echo chamber at Thanksgiving, but was pleasantly surprised.
Personally I find the power differential more disturbing than the economic aspect (which is still pretty disturbing.)
FB (and Google and Amazon and ...) have enormous power with only the crudest of checks, and their users cannot fully understand how that power works and affects them.
Judged by this thread, I enjoy Facebook posts far more than your comments.
Ergo, in my opinion, Facebook is the signal and your posts are the noise.
I know it's not fair to compare a couple of 'activist' comments to an entire social network, but I felt it was worth pointing out that this style of "content" is more destructive to many audiences than Facebook is.
Network effects and an entrenched incumbent are the issue here. You'd have to build something much better than Facebook to start eating away at that monster and disrupt the network. I loved Google+ and Mastodon, but they're not 10x better.
This is not a problem, and I certainly hope Facebook does not fix it. Why? Because it forced the OP to narrow down his audience and show the post only to those who would enjoy it.
That's a much better experience than everyone trying to push everything they publish to you.
The problem he is referring to is not that Facebook narrows down the audience based on likes. His problem is that his mom (and consequently other family members) always immediately likes his posts regardless of content, which narrows the audience down to his family members instead of the intended audience. He wants Facebook to take into account that the person liking his post is his mom before narrowing the audience down to family members.
The post is intended for an audience including his mom (she interacts with his posts by liking them). The author's argument is that Facebook's algorithm is (incorrectly) assuming that her interaction is based on interest in the actual content, rather than her association with the author.
Sure it is. He even says that he later changes the settings so his mom sees it. He wants his mom to see it; he just doesn't want Facebook to draw the wrong interpretation from his mom liking it.
His mom is enjoying his content. However, Facebook assumes that since his mom enjoyed his math or programming blog post, the post must be family content, like baby pictures of his child, that his friends aren't interested in. Therefore, his friends don't get to see his programming articles.
Basically, your posts about work/hobby etc get penalized if your family likes them.
>.. shame on Facebook for torturing them for the exposure when I was originally targeting maybe 10 other colleagues to begin with.
Seems like the Facebook algorithm is actually working for the users by, in effect, blocking insipid idiots from spamming their crap trying to game the system.
You don't 'target' colleagues.. colleagues are people you work with and respect.. not try to spam.
He's clearly on a mission to be 'known'... he spends an awful lot of time and effort worrying about how his posts rank and perform... to me, that's disingenuous.
>Could you fix this algorithm problem please? I’m sure I’m not the only son or daughter to suffer from it.
He's asking facebook to fix his mom? The algorithm isn't broken.
It's simple really, just stop participating in that evil experiment. From the outside, you look like morons; talking to opposite sides of an algorithm while interpreting what comes out as reality. It's been proven over and over again that consuming that crap makes everyone feel bad and hate each other. There are plenty of alternatives, but this one is mine: https://github.com/andreas-gone-wild/snackis
I was rather thinking about consuming bullshit lies about other people's fabulous lives, algorithmically arranged for maximum impact. But yes, bragging about the size of your intellect falls in the same basket. Google it if you desperately need authority to back up common sense. Most of the stuff people post on Facebook just wouldn't be posted in a less twisted medium, and we would all be better off. I know I'm growing tired of watching adults, companies, and even authorities do anything for a few like-clicks. It's sick and corrupting, and it needs to stop.
If you think that posting things that interest you is bragging about your intellect, I think that says a lot more about deep-seated problems you may have than it does about Facebook.
The thing is, I agree with you that many people's Facebook usage probably isn't particularly good for their psyche, but that doesn't mean it's impossible to use it healthily (at its core, it's just a communications medium), and the assumption that articles about theoretical math are "bragging about the size of your intellect" is honestly just pathetic.
> Most of the stuff people post on Facebook just wouldn't be posted in a less twisted medium
The article clearly notes that he crossposts to Facebook. If you think someone posting professional content on their personal website is a twisted medium, I'm really curious about what isn't twisted in your view.