I really think we're entering a crisis here, where sites and apps engineered to maximize corporate metrics lead people down horrible paths of addiction and psychic stress, as they spend their mental energy constantly resisting the temptations thrown at them.
I've made plenty of "remove ads" in-app purchases on my phone. This isn't too different. And it might actually result in a truly useful experience.
I would be very very willing to pay for Facebook to simply not be tracked, especially at such a reasonable price. I know that won't happen, but I'd really love such a thing.
However, they do charge money per month for YouTube Red which removes ads, and a better recommendation system for Red subscribers might encourage people to buy it.
On Netflix, for example, I get recommended stuff that's similar to other things I didn't like.
Amazon is often the best of them for books because their engine seems to only really value the last few things I looked at / bought and so just recommends really similar stuff. Of course, that fails hard whenever I look at something I'm not interested in for whatever reason.
On Steam I wish they had a more elaborate filtering system. There are so many games that I need something like NOT(Adventure+Puzzle) BOOST(RPG): I need to combine tags, not just filter them out one at a time.
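As a rough sketch of what that kind of query could mean (my own toy formulation, not anything Steam actually supports): a NOT-group excludes a game only when it carries every tag in the group, and BOOST tags raise its rank.

```python
def score_game(tags, not_groups, boosts):
    """Return None if the game is excluded, else a boost score for ranking."""
    for group in not_groups:
        # NOT(Adventure+Puzzle): exclude only if the game has ALL tags in the group
        if group <= tags:
            return None
    return sum(1 for t in boosts if t in tags)

# Hypothetical catalog for illustration
games = {
    "Game A": {"RPG", "Open World"},
    "Game B": {"Adventure", "Puzzle"},
    "Game C": {"Adventure", "RPG"},
}
not_groups = [{"Adventure", "Puzzle"}]  # NOT(Adventure+Puzzle)
boosts = {"RPG"}                        # BOOST(RPG)

ranked = sorted(
    ((name, s) for name, tags in games.items()
     if (s := score_game(tags, not_groups, boosts)) is not None),
    key=lambda x: -x[1],
)
print(ranked)  # Game B is excluded; A and C survive with RPG boosted
```

The point of the combined syntax is visible here: "Game C" has the Adventure tag but survives, because only the full Adventure+Puzzle combination is excluded.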
The problem with those recommendation engines is that they're not optimized to serve your needs. They're optimized to serve the goals of their respective companies. And the problem comes when your needs are a little incompatible with the needs of the company.
Consider Netflix for example. Their recommendations don't seem to care much about what you actually enjoy the most. They pay different amounts of money to let you watch different content. So your goal of watching the thing you would enjoy the most is different from whatever the hell their goals are-- probably to get you to watch just enough Netflix to make you not want to cancel your subscription, but not run up their bills on bandwidth and licensing fees.
I'm absolutely confident Netflix could make amazing recommendations-- and probably already has them internally. But it's not in their best interest to give recommendations that are in your best interest.
To the extent that I'm right about this it could be a market opportunity to make an honest and useful recommendation service.
Ultimately they are compatible, though. And yes, I mean in the econ-101 way. This tension should be at least partially resolved via pricing-scheme innovation.
Regardless of how good the recommendation engine might be, it's still always subject to human perception. That can be unpredictable and fickle, on a good day :)
"Oh, you bought a wallet? Here are twenty wallets of the exact same type but with slightly different colors that you might want to buy, too"
That's probably not exact and I don't remember where in my social feeds it came from, but I loved this.
One per floor per house, of course. You don't want to have to carry them up and down stairs.
All houses should have one floor. Why use stairs or elevators when you can just buy more land?
There was however one moment where Amazon got it right: they recommended to me a book that only a week earlier I had purchased on a whim from an independent bookstore with cash. Creepy good.
That is creepy. Is there any way it might've been more than just a coincidence?
Though that was Google.
A while ago I took an Ayurveda class (Indian medicine) and part of it was doing an oil enema. There are devices for this that are also used by a certain population for sexual pleasures. Bought one on Amazon and for months I couldn't show my Amazon page to other people because it was filled with sex toys....
Your indifference is subsumed in the sheer size of others' consumption.
I mean I'm not accessorizing with the latest seasonal fashions, but there are utility reasons to change the kind of wallet you're carrying for different purposes.
Whether the buttons do anything, I have no idea.
Their % match numbers are fairly accurate, but I have had to go into the watch history and delete the occasional movie watched and finished that we actually hated. No number of 1-stars (or thumbs downs) would eradicate its effects on the recommendations.
5 - Absolutely loved it, will buy a disc
4 - Good, but won't buy a disc
3 - Movie was okay
2 - Not a good movie
1 - Stopped watching 20 minutes into it
Asking people "which of these 3 things do you like best?" vs. "rate these 3 things 1-5" will usually give you much more useful data, and be easier for respondents.
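One reason forced-choice data is easier to use (a toy illustration of my own, not from any particular product): counting wins across "pick your favorite of these 3" answers yields a ranking directly, with no per-person scale calibration to worry about.

```python
from collections import Counter

# Each response: the option the respondent picked out of three shown
picks = ["A", "A", "C", "A", "B", "C", "A", "C", "A"]

wins = Counter(picks)
ranking = [item for item, _ in wins.most_common()]
print(ranking)  # ["A", "C", "B"]
```

With 1-5 star data you'd first have to decide whether one person's 4 means the same as another's 4; with choices, the comparison is built into the question.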
Excerpt from https://www.appcues.com/blog/rating-system-ux-star-thumbs/
"In addition to the new rating system, Netflix has new match percentages (up to 100%) to more accurately predict how much users will like something.
The new rating system received 200% more ratings in A/B tests, according to Netflix VP of Product Todd Yellin.
When it comes to rating movies and shows, stars reflect the preferences that people want to have, rather than how people actually behave. Todd gave the example of users giving 5 stars to a documentary but just 3 stars to an Adam Sandler movie that they watched over and over again. “What you do versus what you say you like are different things,” said Todd."
Their users are surely not confused about that. So why does Netflix want to present a prediction as a rating? Is it to flatter their users by telling them that the thing that feels instantly gratifying right now is actually an amazing movie? "Hey, great choice. Billy Madison is a five-star movie. What? Why would you feel bad about not watching Raging Bull instead? It's a two-and-a-half star movie at best."
In other words, Netflix, like Facebook, like Doritos, is engineering itself for maximum addictiveness without regard for honesty. It will shelter you from even what you know and reassure you that whatever triggers a pleasure response in your brain is the best. Relax and enjoy it.
The truth is that we consume easy things a lot more often than challenging things. It would be exhausting otherwise. But the best things are often the most challenging. We know that; we know that easy movies are just a way to kill time. But Netflix wants to do us the service of helping us forget it, because then we might be 1% more at ease when we watch Netflix and 1% less likely to switch to another service.
There's a reasonable objection to this behavioral definition of "like", which is that it doesn't actually make people's lives better, it just fills them with more compulsive behavior. It's not necessarily "irrational" to wish you were more patient, or to want to ask Netflix to show you useful things rather than useless fluff. That you occasionally betray your stated goals does not mean you should be denied the right of self-definition.
In other words, what people say they like is more important, to me at least. See eg "Thinking Fast and Slow" by Kahneman.
I miss the days of "tap tap scroll four clicks" on the iPod to help me rate new music, specifically.
A person is smart. People in general are dumb.
This only works because YouTube stays focused on what you're doing without trying to build a bigger picture of who you are. It would be nice if it could somehow do both, but I don't have a clear picture of how that UI would function.
In other words, it doesn't consider you as someone with a long history. You can change your profile from a conservative to a liberal in a few hours of watching videos. Whether it's possible to have a balanced amount of crazy (not the same as a centrist) is something I'm currently working on; it requires effort!
It's a classic race to the bottom. If YouTube lets Facebook be the one keeping you addicted, YouTube won't get to the long term.
http://timewellspent.io/ is good for perspective
Personally, I find YouTube terrible at keeping me engaged. Even when I want to keep watching videos, their sidebar fails to show me anything I want to click on at least a third of the time.
This is too fine a point. A user will "select" a default choice by doing nothing, and it's in YT's interests to provide default choices that reward the company. This has been researched for decades and is supported by, among other things: https://en.wikipedia.org/wiki/Status_quo_bias
But why? If I've watched hundreds of Zeni Geva videos over the years, but it's my fifth Tool video, why would I be more likely to click on an ad on a Korn video?
Of course it's possible they have, and realised they still make more money by focusing on the more recent videos.
Noticed the same thing happening for the past year or two.
I used to be able to go on YouTube and find a variety of interesting videos I'd never seen before. It seemed like there was a good balance of categories in the recommendations.
Now, I watch one boating video and suddenly my recommendations are 100% boat related with some random clickbait/viral garbage sprinkled in for good measure.
Once a video is removed from your history, you'll no longer get recommendations based on it.
If you aren't logged in, you can't view or edit your viewing history, but you can at least clear it. (Which will return you to YouTube's terrible default viewer profile... sigh)
For example, let's say they have a user who watches 100 videos a week, for ease of math. 50 of those videos are in "core" areas of interest - these do not change over time. An additional 35 are in areas of secondary interest: topics which have piqued the viewer's curiosity, but not deeply interested them. We can expect these topics to change every [1,4] weeks. The remaining 15 are referrals or clickbait from other websites.
How can YouTube differentiate between these three classes of videos? The first class will be heavily represented in their subscriptions. Presumably, the viewer will prefer their recommendations to ignore the third class (clickbait). The second class is the hardest, as the user may want these videos surfaced, but then want them to decay over time as their interests change.
I think this is the problem that they are attempting to solve, with varying degrees of success.
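One way to sketch the decay idea above (an assumed model of my own, not YouTube's actual algorithm): each topic's weight is a sum of exponentially decayed contributions from past watches, so secondary interests fade over a few weeks unless refreshed, while subscriptions mark core interests with a weight floor.

```python
HALF_LIFE_DAYS = 14  # assumption: secondary interests fade over ~2-4 weeks

def decayed_weight(watch_days_ago, subscribed=False):
    """Topic weight = sum of exponentially decayed past-watch contributions."""
    w = sum(0.5 ** (d / HALF_LIFE_DAYS) for d in watch_days_ago)
    # Subscriptions flag "core" interests: never let them decay to zero.
    return max(w, 1.0) if subscribed else w

core = decayed_weight([1, 3, 5, 90, 120], subscribed=True)  # long-running interest
fad = decayed_weight([20, 25, 30])   # secondary interest, last watched weeks ago
fresh = decayed_weight([0, 1])       # brand-new curiosity

print(core > fresh > fad)  # recent core beats new curiosity beats fading fad
```

Clickbait referrals (the third class) could simply be excluded from the sum when the referrer is external, which is one plausible reading of what YouTube already does.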
For example, I get emails from Pocket "You saved a popular article..." And I think "Who cares?" Don't tell me what I know, tell me what I missed. YT is similar. It recommends things I've watched. I'm looking for new and interesting and I'm getting yesterday's news? That doesn't excite me.
But because the playlists are dynamic, YouTube keeps on shoving songs from other playlists (and genres) into each other, trying to generate a "perfect playlist" and in the process making all the playlists sound very similar with no more genre distinctions except for the first couple of songs.
This seems like a good algorithm to me, as when I'm watching skateboard videos with my friends, I don't want "how to caulk tile joints" to show up.
But for me it is just silly.
As an example: I watched a few episodes of Penn and Teller: Fool Us yesterday. I hadn't really watched it before, and while I like Penn and Teller in general, I don't remember watching them all that much on YouTube prior to yesterday (I'm sure at some point I had watched a video or two, but not more so than anything else I stumbled on.)
Today, 12 of the top 30 videos on my YouTube home page are specifically Penn and Teller: Fool Us. Not magic in general, not Penn and Teller in general, but specifically that show. That seems like the very recent past is way overrepresented.
So e.g. if I go and view Russian dashcam videos they're going to automatically play more of them, even though I've shown no prior interest in that topic.
Having two systems for recommendations would introduce a lot of UI complexity, so I can see why they didn't go for that, and why the recommendations are consequently tuned for people who are actively watching videos on some topic right now.
Do you mean in terms of implementation, or for the user? Because as for the latter, I think the two are already conceptually different (homepage vs. next video), so I don't see how just feeding it different videos would make it more complex.
So let's say I'm interested in exactly two things. Russian dash cam videos and videos of trains without narration.
I go to the homepage and click on one type of video, what should the next video be? A random video from the two categories? One or the other? Should it show me a banner at the top indicating what recommendation mode I'm in (historical preference or "similar to current video").
Now I go and do the same on my Android YouTube app which has no real UI equivalent of a homepage, what happens then?
This is a lot of UX complexity for a feature few probably care about.
By the way, the Android app does absolutely have a UI equivalent of a homepage; it's even called "Home".
Making your app behave differently because you navigated to the current state via different menus is very bad UX design. That's all I'm saying. Your suggestion would entail either UX complexity or such implicitly different behavior for YouTube.
Yes, Android has a "Home" button. But what I meant by no real UX equivalent is that when you open YouTube on the web, you'll open a new tab and go to youtube.com.
When you do so on Android, you've likely just dismissed the app in the past, and opening it again will bring you back to the last video you were viewing. You don't go through the homepage by default.
Thus it's more of just another menu item on Android, not something that's equivalent to / on a website.
For example, Russian dash cams to Russia at night to the Russian sleep experiment creepypasta, to horror games to video games in general, if that is what the user tends to watch.
I know this has graph theory written all over it and the shortest-distance problem has wreaked havoc for centuries, but I think with enough resources Google/YouTube could find a good compromise in this situation.
I think I'm more of a fan of FB than the average web geek, probably because I used it at its phase of peak innocence (college years) and have since weaned myself off to the point of checking it on a less-than-weekly basis. I also almost never post professional work there, nor "friend" current colleagues. Moreover, I've actively avoided declaring familial relationships (though I have listed a few fake relationships just to screw with the algorithm). But wasn't the feature of making yourself a "brand page" and/or having "subscribers" (which don't count toward the 5,000 friend limit) supposed to mitigate this a bit? I guess I'm so used to keeping Facebook solely for personal content (and using Twitter for public-facing content) that I'm out of touch with the sharing mechanics. That, and anecdotal experience of how baby/wedding pics seems to be the most Liked/Shared content in my friend network.
If you have a page, Facebook wants to milk you, so they've got this weird algorithm that pushes your posts to only a small fraction of the page's followers, expecting you to start "promoting" them.
Basically they screwed "organic reach".
Oh, and about 3 days ago I created a second Facebook account with the purpose of connecting with software developers and my English-speaking friends (I'm Romanian). I did this thinking that I don't want to share semi-private pictures of family with strangers, or to spam my family and friends with programming stuff.
But only 24 hours later they've disabled my account because of "security concerns", without notice and now I'm waiting on their support to reply after I've sent them my picture for validation.
And another thing - the online parents group from my son's school is on WhatsApp. They tried a Facebook group, but the problem is that when important announcements happen, not all parents receive notifications, so they resorted to something that works.
Facebook is freaking terrible.
I'm not Facebook's biggest fan, but I think this is a valid security concern (I'm assuming you used the same name as your initial account). Cloning FB accounts for impersonation is a real threat vector for getting inside someone's network: when accepting friend requests, most people don't double-check whether they are already friends with the purported requester.
While this does enable Facebook to force brands to buy advertising, it is also a user-friendly change.
Nobody who I have spoken to about this thinks that "like/follow" = "I want to see everything they post" on Facebook. People like restaurants they had a nice dinner at and want to publish their support, it doesn't mean they want to see the daily special every day. Even if they "follow" you, it doesn't mean it should show up before a friend's holiday pictures today, and tomorrow there is already new content.
The reality is that most facebook users do not care about the posts from pages they like/follow, even if they pressed the button some time ago. An automated filter that keeps these posts from showing up is good for the user experience.
It's the same as "facebook friends" - they aren't real friends, just people you met once at a party in college. It might be interesting to see a post from them once in 10 years when they get married or move to a new country, but not their daily life. The same applies for brand pages - a like should give you once in 10 years access to their feed, but not more.
I'm co-admin for an org's official Page (and Group). Our bylaws require advance announcement of various things, like meetings and resolutions. We've been using our official Page just like our mail listserv, assuming all our subscribers are getting all of our announcements.
Frankly, this sucks. Minimally, it violates the UX design principle of least astonishment.
They will get it if they access your page directly, or subscribe to notifications from your page. It's the same with a personal profile, I believe — not everyone will receive your posts on their news feed unless you've interacted with them a fair bit.
Quite a few friends were following pages for the same reason.
I do that as well, but I think it's extremely rare among Facebook users. Everyone else I know either:
* Puts up grudgingly with "spam" they no longer want to see, or eventually
* Complains "Facebook is full of crap these days, I'm deleting my account".
A filter algorithm is a much better experience than manual unsubscribe for most users. With how easy it is to "like" anything you see on Facebook and the rest of the web, it is almost necessary to assume that subscriptions should "expire" unless the user keeps actively interacting with the page.
I like Facebook's utility but man their algorithm sucks.
The current version of the algorithm is not working.
Do you know there are friend groups you can use to eliminate that problem? "Family and school," "Programmery Peeps," if you post using either of those groups for visibility then it seems like the issue is moot.
Does it make things more polarized, I would say yes. But that is a general thing with the internet and has been true since the creation of usenet; the Internet accelerates everything including political trends. I find the best way to guard against technology's impact on cognition is to maintain time for reading things on paper, and also time free of media consumption.
Usually the article contents themselves are fine and quite reasonable, but the Facebook shares have some completely false click-bait headline attached.
> Basically they screwed "organic reach".
The average user follows way too many pages, and has too many friends all posting, to see all the content every day.
The algorithm isn't there to milk you; it's there to stop the news feed from becoming the MySpace bulletin feed (which was an effective way to post in 2007, so long as you posted 5x every hour).
I have a page for artwork. Most of the time, Facebook will tell me how much better the current post is doing than so many other posts on the page (even when that doesn't seem likely). It reminds me to post after 2-3 days, because my few fans will miss me. I do art and can't actually get things done that quickly, after all. Every post has multiple prompts to boost the post, noting how others with similar content have boosted posts. Sometimes it tells me how many hundreds of local people will see it. Sometimes it'll be a general prompt. These even show up as notifications, sometimes on both my personal and my artist page. Additionally, I get ads for post or page boosts in my normal feed.
The algorithm is just a part of the milking.
What makes me go nuts though is stuff disappearing from my FB wall (using mbasic web). Lately I see that a lot: see a few interesting entries on timeline, I click one, come back to the wall and they've disappeared and some random stuff from 3 days ago took their place and I can't find them again.
Same happens in YouTube Android app. See a few interesting videos, click one (I can't "open in new tab" in the app...), go back, they're gone and replaced with something totally unrelated. Ugh.
I hypothesized about other benefits and drawbacks here.
For a while you could manually switch back to a chronological news feed every time you logged in.
Instagram was one of the last holdouts but they switched a year or two ago.
Twitter is amazingly bad at this. It takes screenfuls of junk on mobile to get to the chronological stuff.
You won't see many of the older posts, and facebook gives no indication that they have been pruned. It's very hard to find old things without just scrolling through chronologically, which is slow and error prone.
Drives me insane.
Hamburger menu > Scroll way down to the "Feeds" section > "Most Recent"
FWIW I would like that option as well.
Similarly with Amazon: products should have some sort of "elasticity" score, so it can learn that recommending inelastic products is a waste of screen real estate. I mean, I doubt the model is giving a high % to most of those recommendations; it's likely more a business/UX issue, in that they've decided it's worth showing you low-probability recommendations instead of a cleaner page (or something more useful).
YouTube, on the other hand, seems to be precision-tuned to get you to watch easy-to-digest crap. You consume the crap voraciously but are generally left unfulfilled. This is a more difficult problem, where you're rewarding naive views rather than a harder-to-discern "intrinsic value" metric. As a long-term business play, the model should probably weight more intellectually challenging content, just like fast-food restaurants should probably figure out how to sell healthier food: by peddling crap you're only meeting consumers' immediate needs, not their long-term ones.
He added a new comment yesterday; I only saw it because I randomly decided to read through the comments.
Those who commented on it should have received a notification - well, in the end, 2 people got something.
This is how you effectively kill conversation, which baffles me, because keeping conversations running = engagement, which is supposed to be one of the end goals.
I get the "need" for a filter bubble, even though I'd simply let people choke on the amount of crap they'd get if "follow" actually meant "get everything"; they might learn not to like things without thinking.
But not sending notifications at all? Why? Why is that good?
$friendINeverTalkTo has commented on $PostByPageIDontFollow is NOT something I want to get notified about. Neither is
XYZ has uploaded a photo
And should you really not like some content, the solution is unfriending the poster, rather than simply matching against that type of content (political, religious, etc).
The fact there isn't a private dislike button (that no one sees you clicked other than Facebook), is remarkable at this point. It's either woefully obtuse, or intentional so that a feed of false positives better hides moderately targeted ads.
It's to the point now where I just log out so I can see what's available, rather than a continuous feed of what I already watched.
That solution doesn't require defriending or unliking a page. Facebook uses this feedback when recommending content.
I vote for the last option. I think there was an explanation, basically saying it would make people upset or sad and they wanted to avoid that. The solution wound up being a variety of emotions to pick from (which count for more than a simple like). They included negatives like "angry" or "sad"... I'd still just rather have a simple dislike button.
...note that I can choose to dislike the post itself, or hide the entire source or app if I prefer.
Of course the downsides are that you risk disappointing someone who obsessively checks their friend count on their birthday, and that it only really works if you log in every day.
I have a private Facebook account with the people I personally know, and a Facebook page to publish things for my blog. And, oh boy, where do I start with my complaints!
1. So, the problem of the echo chamber is still there. If I look at the last 30 things I've posted, it's always the same group of people that interact with them. One of them starts, the next ones follow, and usually, once those 10 people see it, the interactions end. In the end, it reaches some 400 people according to Facebook, out of which ~40 people click on it and 10 regulars interact with it. Sometimes my posts get shared by multiple other Facebook pages, ranging from hundreds to six thousand likes. It makes a difference for the Facebook "reach", but not for the clicks or the interactions (maybe there's one lone wolf who decided to follow it up and like it).
2. I'm still stuck in that "tax" he mentions in the introduction. Everything on Facebook needs to be hand-curated, thoughtfully composed, and published within the Facebook platform. For my last two articles, I wanted to reach as many people who like my Facebook page as possible. This was not achievable by any sort of sharing from my website, so I had to either pay money or post it as a Facebook "Note". I decided on the latter, since the project the articles were about was already hurting me financially.
3. Instability. Whenever I open my Facebook page, my browser is either really close to crashing, crashes, or simply doesn't load the page properly. It doesn't happen anywhere else on Facebook, exclusively on my own page. Just now, I wanted to scroll down to see the reach of my latest two page updates, all posted within the last six hours. Took me three minutes and multiple refreshes. Same thing happens with every other browser I've tried to use.
4. Oh my god, the spam! With every single Facebook update I post, there's something in my notifications. Either it's a prompt to boost the post, or it's more popular than X% of my previous posts, or my page had X views, Y likes, and Z shares, or some random guy is somehow ranked highly enough that his like deserves its own notification. Whatever I post, there's some bullshit notification that I don't want to see, and there is no option for me to fine-tune it.
I have buckets of hate for this, and these prompts seem to be obvious bullshit. Though you forgot the infamous "you haven't posted in 3 days! People want to hear from you!"... which simply isn't true, nor possible. My page is an art page, and I don't have new content twice a week.
Oh, and I forgot to even mention the "boost it for free!" that I get from time to time, even though it's not really free, it just says so on the notification. A friend of mine fell for that and was charged. I clicked on it and, luckily, I didn't enter my credit card data in the past, so I was instantly suspicious when they asked for it.
And oh my... though, I'm now curious if he happened to notice any actual results from it or if the negative reports still hold true.
This won't happen because Facebook wants you to think everyone is seeing your posts. People would be livid if they knew just how low a % of friends were actually viewing their stuff due to algorithmic meddling.
For all we know, the "mom" factor isn't weighted for things like external links. The OP is making that suggestion, but perhaps Facebook thought of that and discounts the relationship if it's (a) an external link or (b) the family member likes more than X% of posts.
If moms auto-like every post, then how is that a relevant signal? Everyone has a mom. That would mean every post is getting penalized in the same way (which effectively means no posts are getting penalized).
And if circumventing this was as simple as excluding his mom, wouldn't the effect be even greater if he excluded all non-technical friends and family?
Which pretty much just means you're posting this for the greater public, which presumably a lot of users of Facebook's API already do. Since his intention is for his content to be seen by the greater public, then... go ahead and tell the API that?
It's a great angle for an article, and it's very shareable, but he provides no data (even though he seems like someone who would have all of the data).
I know this is only another anecdote and real data would be preferable. But I'm convinced that the described behavior absolutely happens.
The wrong way to do it would be to show it to 5 people and as soon as his mom likes the content, decide the content won't be interesting to the other 4 people they showed it to. I haven't seen any evidence of that happening, and I don't think the article points to any (anecdotal or otherwise).
He should try alternating the privacy settings with each post.
Question: "Why does XX do things this way?"
Answer: "Because it increases engagement."
Why does the Facebook algorithm do this? Because it increases clicks.
Why does Youtube use autoplay? Because it increases watch time.
For every single social media site.
Similar to TF-IDF, where you mitigate common words by dividing by their overall frequency, you should be able to divide the weight of any particular "like" by the frequency of likes between the two people. That way a genuine expression of interest by an acquaintance is weighted far higher than one from a relative or close friend who reflexively likes everything you post.
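A minimal sketch of that weighting (my own formulation of the comment's idea, not anything Facebook documents): divide each like's weight by how often that (liker, author) pair occurs, the way TF-IDF divides term counts by document frequency.

```python
from collections import Counter

# Hypothetical (liker, author) pairs observed over some window
likes = (
    [("mom", "me")] * 50            # mom likes everything I post
    + [("acquaintance", "me")]      # a single, deliberate like
    + [("mom", "sis")] * 40
)

pair_freq = Counter(likes)

def like_weight(liker, author):
    # Rare interactions carry more signal, like rare terms in TF-IDF.
    return 1.0 / pair_freq[(liker, author)]

print(like_weight("acquaintance", "me"))  # 1.0: genuine, rare signal
print(like_weight("mom", "me"))           # 0.02: reflexive likes discounted
```

A production system would presumably smooth this (e.g. add-one counts, or a time window) so a pair's first-ever like isn't infinitely weighted relative to everything else, but the inverse-frequency core is the same.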
I think the bigger issue is family members, friends, and relatives who cut out their non-FB-using loved ones by ignoring all other methods of telecommunication. "Oh, you didn't know we planned a wedding? Too bad you're not on FB!"