Try UI before AI. Purely chronological will suffer the spam of frequent-posting friends. Group their posts, which can be flicked through as a cluster.
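A minimal sketch of that clustering idea, assuming posts arrive newest-first as dicts with an `author` key (the field names and the threshold of 3 are made-up assumptions):

```python
from itertools import groupby

def cluster_feed(posts, threshold=3):
    """Collapse runs of `threshold`+ consecutive posts by one author into
    a single cluster entry the UI can render as a flickable stack."""
    feed = []
    for author, run in groupby(posts, key=lambda p: p["author"]):
        run = list(run)
        if len(run) >= threshold:
            feed.append({"type": "cluster", "author": author, "posts": run})
        else:
            feed.extend(dict(p, type="post") for p in run)
    return feed

posts = [
    {"author": "Timmy", "text": "meme 1"},
    {"author": "Timmy", "text": "meme 2"},
    {"author": "Timmy", "text": "meme 3"},
    {"author": "Jen", "text": "cat video"},
]
# Timmy's three consecutive memes collapse into one cluster;
# Jen's post stays standalone.
```

Since `groupby` only groups consecutive items, the feed stays strictly chronological; no ranking is involved.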
All in all, am I the only one that thinks that the current fashion in page layout squanders real estate? Make the news feed a huge grid, with each post smaller, like one of those photo walls in a hip restaurant.
1) I'm looking for something specific.
2) I'd rather not be surprised right now.
3) I'd like to get through the search/index as fast as possible to get where I'm going.
The above are user-voiced versions of principles that drive traditional UI, and are absolutely better met by that optimization process.
AI-driven search and feeds were exciting because they were the first method that captured use cases where users wouldn't make those statements: fuzzy queries about general interests, the desire for diversions "curated" by your own social group, and search/feed review as itself an entertaining experience.
Then, it turned out that these unexplored use cases were actually the giant, underwater part of the iceberg of what people wanted from information systems. The growth and attention capture you can drive from meeting those cases is not only massive, it sucks the oxygen out of traditional value propositions by out-competing for the same attention.
I think a return to traditional "capture and facilitate intent" UI (or even better a synthesis of the two!) will only be driven by users finding a way to explicitly value their own clarity of attention. We see signs of this in everything from ad blockers to focus-oriented browser plug-ins, but I think we're still missing the FOMO-conquering expression of this value proposition that becomes a competitive product.
Nothing frustrates me more than when software thinks it knows better than me.
Would it be fair to rephrase that as "... and fails"?
Because that's what we're really talking about, right? Those times when the smart system failed to correctly predict user intent, but lacked a fallback method for the user to clarify?
I'm sure there's an uncanny-valley-esque term out there for the difficulty "99% correct" smart systems have when users hit that 1% of incorrectness.
> Would it be fair to rephrase that as "... and fails"?
May I borrow that phrase for all future discussions with UX colleagues regarding smart UI? I really hate smart UI: it changes while I keep expecting other things, and I constantly have to relearn or search for items/stuff that was there just a moment ago but cannot be found after a reload.
To make something "easy" requires the system to predict what you want. To make something "simple" requires the system to behave in obvious ways so the user can reason about how to make it do what they want.
I'll take "simple" over "easy" any day.
I don't know if it will hold up for other searches, but so far so good.
The searches had to do with real things as well as programming.
Just about all the results were for vegan activism, without a hint of it being a joke/phrase.
The top result when switching to "verbatim" mode was the joke, standalone, correcting "plants" to "vegetables". (Searching for the correct phrase in either mode got me to a zazzle page with a shirt identical to what I remember, but that's not really relevant to "use verbatim search!")
- a site uses a weirdly named something
- and Google fuzzes it to something it has a million results for
I think this has happened twice, despite me using both double quotes and verbatim.
I've reported it and they've fixed it shortly afterwards but it is really annoying.
(Side note, your comment read to me like something Douglas Adams could have written about bureaucrats on a far away planet. That twist at the end that summarizes the entire issue in a few words. Well done!)
Yesterday, I was trying to find if what I was dealing with was a bug on this particular version of Photoshop.
I quoted "CC 2018" and the version number, and verbatim still showed Adobe forum posts from 2012.
Date range works, but you have to find the public release date.
By that time I had finished installing the previous version already.
I miss Google’s simpler times, let me just grep the web.
(This is a serious suggestion. Do you really get anything out of Facebook?)
I like the town I am in. I don't wistfully wish I could go back to my hometown in Georgia, nor do I wistfully wish I could somehow return to California and make that work.
/cranky oldster moment
I log in, check if close friends or family have sent me anything, then log out.
I'm trying to reduce the amount of time I spend on the site, and having things in strict chronological order really helps. Scroll down till I see something I've seen before, and then I know it's time to quit.
That may be a wonderful thing if you're "friends" with a few hundred indiscriminate meme-posters, but if you're looking at a couple-three dozen relatively taciturn folk who are actually friends (though geographically and temporally dispersed), it misses out on a lot, and you have to scroll past things you've already seen to find things you've never seen before. Still, it's better than what FB gives you by default.
Point being it is really far from what one would think chronological means.
Self-driving car tech is also showing this lately.
Just updated my Google bookmark to https://www.google.com/?tbs=li:1, which seems to be the parameter to enable Verbatim.
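A tiny sketch of attaching that reported `tbs=li:1` parameter to an arbitrary query (the helper name is made up; this just builds a URL, it doesn't verify that Google still honors the parameter):

```python
from urllib.parse import urlencode

def verbatim_search_url(query):
    # tbs=li:1 is the parameter the comment above reports enables Verbatim mode
    return "https://www.google.com/search?" + urlencode({"q": query, "tbs": "li:1"})

print(verbatim_search_url('"weirdly named thing"'))
```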
That's how flickr deals with mass uploaders.
Anyone who's ever had a real personal assistant any good at the job would very quickly look at the best AI as the faint shadow of the real thing. And it's going to be that way for a while. Forever? Probably not, but maybe.
I'd argue that AI will replace many legal and accounting jobs before it replaces good personal assistants. Just like how we thought chess was more difficult than walking, and the exact opposite turned out to be true.
A longtime personal assistant who has your preferences really dialed in can probably come close. But given how much I sometimes end up fussing over even my own travel, it's really hard to see virtual assistants getting there anytime soon.
If they don't come close, they can safely filter out 90% ahead of time and present you a table of the remainder, saving you lots of time.
On the other hand, more likely that digital assistant was made by the booking company and is trying to upsell you with high margin items.
e.g. sentiment analysis, fake-news filtering...
Users can self-select. We can always just yell at Timmy, who shares too much, or outright unfollow him on Facebook if we don't want to see what he shares.
I suggested that just because someone posts something on their Facebook wall, it doesn't mean I will see it. (This was around 2009.) People often reacted as if I were saying something insulting. I think they'd understand things the other way around, but I guess they don't see themselves as noise. There is some user education that the network needs to do.
In any case, be it Facebook or SharePoint or 37signals, there should be zero obligation for anyone to look at things you post.
If you are having a baby shower and don't invite me personally and just write on your wall, that means you don't want me there.
I still believe in a 100% non-intelligent newsfeed. Ben Evans is wrong and I'd even say malicious for dismissing the idea of a reversed chronological newsfeed.
The only reason this is a problem is Facebook desires to inflate the number of connections. Let people cull their newsfeed and they will. Set expectations for posters that their posts may not get to everyone and they will understand as well.
We'd all benefit from a very different system which would structure things as we need it to, but that's sadly not what our economy incentivizes corporations to be right now.
I need an algorithm for that. There's no good way for me to say, "I don't want to see all the stupid shit she posts, but please show me important stuff" because I lack the ability to rigorously define "important stuff". And it's unreasonable to expect her to personally text those updates to every single person she knows.
Ultimately, that line of thought ends with me developing my own algorithmic feed, which I don't really want to do and isn't scalable anyway.
As an example of deduced intent, let's envision a system where every action I commit over the course of a year can be observed and recorded for inference analysis to determine if I am evil or good. Assume that in all ways I meet the standard definition of good... I help people across the street, save puppies, etc. At the end of the year the BEST that you can conclude is that I behaved ethically (good actions), but that says nothing about MY intentions. I may in fact be evil and am only ingratiating myself within a community in order to later kill them all. There is a reason that, within philosophy, morality and ethics are separate words. There is a reason that, when viewing demographic data, people say "I can tell you what 9 out of 10 people in the group prefer, but I can say nothing about any one individual within it."
As to the preference between inferred vs explicit intent, you're placing your own judgement of the value of inference vs explicit above that of the user which will inevitably lead to frustrations on the part of the user. In the simple case of a catalog there are two distinct intent patterns that users engage in... one is the specific intent to find a known product and the other is what I refer to as "discovery." Discovery is, as the word suggests, finding things that the user did not know existed or knew very little about.
News feed prediction behaves the opposite of this. It tries to predict your intent based on past behavior, but that is, as you say, not possible, and could likely be why the news feed is dying.
YouTube's UI prominently features search. So if my interests change and I search for something I've never been interested in before, YouTube can show me a pretty good selection of videos and tune its recommendations for future viewing.
I will say, though, that all social networks seem to fail by weighting past behavior too strongly. Whenever I start a new account on a social network, I find its recommendations are much more tuned to my interests, and those recommendations become less relevant over time.
Biologically, I'm a new person every seven years. As a collection of varied interests, it seems like I turn over much more often than that - and I suspect the rate of turnover is different for everyone.
For me, FB is far too controlling. So much so that I have 10 groups that I've started to "organize" my friends and in a way build my own feed. It helps. But it's still not what I want, mainly because a lot of people are not group-share minded.
I don't understand why FB is spending soooo much time and effort trying to guess what I want when all it has to do is...wait for it...ask.
A recommendation/suggestion system should offer its opinions and the user is free to follow them or not. Replacing explicit user intent with inferred intent is over-ruling the user, saying that the system's decision-making is more important than the user's.
At the forest level most people are predictable, often to a fault.
This is obvious enough. But if I tell you I would like to lose weight and improve my diet, yet you, knowing I have a sweet-tooth, opt to keep offering tempting sweets and desserts anyway, that would make you kind of a jerk wouldn't it?
I wouldn't want to spend much time around someone who actively tries to undermine my attempts at personal growth or improvement.
No, it means you aren't actually doing what you say you want to do. If you however gave the system input about wanting to lose weight, say by buying weight loss supplements, exercise clothes etc... then the "system" will adjust to give you what you want.
Your example is basically akin to a scenario where you frequently order dessert at your favorite restaurant, then decide you want to lose weight, never tell anyone you want to lose weight, then get upset when the server brings you the dessert menu.
Except for the part where he said "if I tell you I would like to lose weight and improve my diet"? So not like the "never tell anyone" part at all?
What he's describing is more like a system which learns "most people who research and buy diet stuff end up failing and going back to the ice cream and junk food, so let's start aggressively targeting them with unhealthy crap once they start looking up diets."
That may be accurate regarding behavior and revenue maximization, but it's also pretty shitty.
Your personal bias is that Family > Heroin. In this example that also seems to be the social consensus (which I hold as well), but that doesn't mean the individual must necessarily value family over heroin.
Your actions are directly tied to your preferences - whether optimal or not. If you neglect your family because of heroin, it's because at some point your reasoning system decided that the hit was more important than the relationship. You can argue that preferences can be hijacked by chemicals, but in that case it becomes a reductionist determinism argument - that can be easily argued down to biochemistry in either direction.
It's too easy to fall into the "common sense" trap when trying to evaluate human behavior.
Well, yes, exactly :)
Maybe this is just a phrasing error, but the idea of inferred behavior doesn't make sense. If you observe an action that is measurable (buy x, go to y), then it is an explicit behavior.
Further, the concept you seem to be arguing against is the economic concept of revealed preferences. If I understand your argument, it is that it is impossible to derive a user's preferences from observing the user's behavior. This is contrary to decades of empirical research in economics and cognitive science showing that you can rank preferences based on such behavioral profiling.
Your example builds a strawman by introducing a value judgement into the example (evil vs good) that is unmeasurable. It conflates some kind of moral state with a series of actions without any way to empirically value them.
The reality is that most people cannot accurately state what their true preferences actually are, and thus your actions reveal more about your future behavior than your "intentions" do. This is a classic debate in philosophy of the mind.
I'm not arguing against revealed preferences as useful aggregate information regarding the probabilities of someone choosing X, I'm saying that you both can't know their intentions regarding those choices and more specifically those preferences are untrustworthy in predicting any single opportunity to choose X.
The idiocy of many current products, be they physical or informational, is replacing the user's ability to make a choice with what the product designers believe... especially without explicit agreement. In most cases the user didn't opt-in to outsourcing or subordinating their decision-making to product X and every time their intent conflicts with the product's decision-making the user will become more discontented with the product.
It's simply that current recommendation/prediction systems have incomplete information. In this case an explicit goal or preference is not stated by the users. In the absence of explicit preference statements, a recommendation system can only act based on the behaviors it measures.
The purpose of my life's work is to align revealed preferences with stated preferences, to determine optimal action. The majority of systems are confounded because 1. They don't ask for preferences and 2. People state preferences (signaling) that are not aligned with their true beliefs.
Your work, at least as you present it, does not seem to collide with that statement because it asks the user for intent and then examines behaviors to see how they align with that intent... there is no inferring trap. Determining optimal action, while its own complex kettle of fish, is only a concern if it over-rules the user's decision-making WITHOUT the user opting in to such an arrangement.
There's no 'mistake in viewpoint or assumption' involved in Simpson's Paradox.
Your example story was of someone committing 'a sequence of good deeds throughout the year' that may be part of a grander, longer plan to commit a profoundly not good deed.
Analysis of month by month, grouped or in isolation, would reveal an apparent trend that would be opposite to the evident overall trend once you include the ultimate, negative act.
From the wikipedia link I provided earlier:
"Simpson's paradox ... is a phenomenon in probability and statistics, in which a trend appears in several different groups of data but disappears or reverses when these groups are combined."
Fundamentally, no. There may be practical differences between specific machines and people.
> As humans, we may be able to pick up subtle cues that machines simply lack the ability for.
Depending on the machine, the reverse could be true.
But now we're back to the problem of deducing intent. Only now we have to deduce the intent of the human who deduces intent. (Back to the problem of "who watches the watchers?")
> It ought to be able to work out who your close friends are, and what kinds of things you normally click on, surely?
I wish this didn't even seem right to anyone, even from the beginning. It's different whether I happen to click or be interested in something once than it being what I want to see again. If I go to a movie, that's not evidence that it's my sort of movie — I don't know if I'll like it until I see it! Priority really has to be on the evaluation of things and the explicit desire to get more, not on reading things into the fact that I was open to something in the first place.
With the what-I-click approach, one day I am prompted for some reason to click one sort of thing… then I'm shown that more the next day… then one pretty-arbitrary starting point is turned into a defining filter for me forever. This reality means that simply clicking something out of curiosity threatens to define you and your experience for years.
The ethical way to handle this is to do some mix of (A) giving back control to the readers for what they explicitly choose to follow (not whatever they happen to click, and not even what they "like" because liking should not equal following) and (B) doing the opposite of bubbling and actively insert some mix of stuff-they-don't-usually-click, i.e. novelty so that people are actually exposed to new ideas and perspectives they might otherwise not even know exist.
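Point (B) could be sketched as a deterministic interleave of the explicitly-followed feed with a novelty pool (the function name, rate, and seeding are illustrative assumptions, not any real platform's mechanism):

```python
import random

def feed_with_novelty(followed, novelty_pool, novelty_rate=0.2, seed=0):
    """Interleave explicitly-followed posts with occasional items the
    reader doesn't usually click, at roughly `novelty_rate` per item.
    Seeding keeps the mix reproducible for a given session."""
    rng = random.Random(seed)
    pool = list(novelty_pool)  # don't mutate the caller's list
    out = []
    for post in followed:
        out.append(post)
        if pool and rng.random() < novelty_rate:
            out.append(pool.pop(0))
    return out
```

The followed posts keep their order; novelty items are sprinkled between them rather than replacing anything, so the explicit subscriptions of point (A) stay intact.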
I feel like I've seen this behavior on other sites and systems as well. I've no doubt the prioritization initially works, but it seems there are other factors at play that seem to bring up old data.
In my case my team is not nearly as popular as Nebraska so I suspect the logic very roughly ends up something like "Ok he watches college football and oh hey this Nebraska story is really popular and stories about that team are big now so here ya go..."
I spent a year slowly cultivating a pretty decent feed of relevant content on my Pixel, and it went full Netflix 2010 seemingly overnight.
Early Netflix suggestions, no matter how selective and consistent your selections over many years, could be instantly subverted by your lonely sister-in-law getting on your account one Saturday night and watching a few foreign language romance flicks. After that, you'd never really stop getting recommendations for "movies starring sexually aggressive male leads" or "films with actors who look like Antonio Banderas".
I think I'm done being used to train everyone's algorithms.
What's wrong with Netflix anyway, that one off-topic movie can sway your 'preferences' so drastically?
HAL 9000 failed because he was told to lie... You'd think that prioritization would be easy from the outside. Granted, it was a story, but it seems to fit that a minor change results in a collapse as far as function goes.
It's one thing when something you don't care about keeps showing up. It's another to think, "I'm actually interested, but if I click this once, then I'm liable to effectively commit to this staking a permanent place in the half-interesting things that fill my life from here on!".
Unfortunately in reality that doesn’t seem to be the case. Last week I bought some wireless headphones, and now my feed shows a bunch of those. It also keeps showing me books by an author who I bought a series from, but never finished (on Kindle). And a selection of Spanish books, that my wife needed once for her studies. And for some reason a load of large kitchen applicances.
In my experience, it produces better and more pleasing recommended content in its feed due to this control.
Inference models such as YouTube's always feel like you are being pushed toward certain content, and they actually make me personally hesitate to sample or click on certain videos.
In the YouTube model, simply sampling a single video starts up the recommendation engine, which keeps pushing similar videos, and as a user you feel railroaded into being profiled.
And thanks for the note about Apple News. I must admit that having left Apple over their walled-garden iOS direction (turning to embrace of GNU etc.), they have stuck to a higher-road in a lot of regards compared to the other big companies. They're still nowhere near as awful as Google, Facebook, Microsoft… if only Apple weren't sabotaging copyleft and hadn't pioneered the in-app advertising, sales, and tracking that are actually so much of what's wrong…
I don't want to see some random cat video Sarah thought was cute. I can go to /r/kittens myself if I want to see random cute cat videos. I do want to see the video Jen took of her cat because I know him and sometimes feed him when she's out of town.
I think I'm not alone in this, and some of the popularity of Instagram is due to its UX making it easy to upload content directly from a mobile device and not so convenient to share content not created by the user.
I've used the FBPurity browser extension to create a nearly share-free Facebook feed. It reduced the amount of content by 80+% and greatly improved the signal to noise ratio.
Again, if you were able to control exactly what you wanted to see with a control panel - that's a simple thing to implement, but that's not what Facebook does or is. It's an advertising platform. "Social networking" is doublespeak.
It might, however, be a compelling feature for whatever displaces Facebook to offer. It would require a business model that's not identical to Facebook's.
I've been manually blocking page after page after page ... every time something gets through, and I think to myself, "ugh, this is stupid", I "hide everything" from that page in the hopes that I'll never see it again (and yet, "LAD Bible" keeps coming back into my feed!).
As you say, I much prefer to see original content and pics posted by people ... in fact, even if it's a link that's posted (and not re-shared from another page), it's usually interesting because they took the time to actually copy/paste that link from a website, so chances are they've actually read the content.
I agree, and would be fine with including that sort of thing, especially with a UX that encouraged writing something about the link. A text post including a URL is still something my friend wrote.
A full-page grid of posts doesn't solve the problem; if anything, it makes it worse. You'd lose visual hierarchy and grouping, and posts that are mostly red or bright would get more eyeball time than the rest.
I think a more natural control over your own News Feed, without going through a screen of dials and buttons, could be the way to do it. Simple filters at the top, similar to what the Gmail inbox introduced: Funny/Sad/Deep/etc., based on the content and the response from other people (like/angry/sad/etc.). And like the Gmail inbox filters, use them or ignore them. You still need some smarts to nuke the everyday spammers. And sometimes chronology isn't the best order, but it's not too hard to work out when a big deal occurs; e.g. someone posted 12 pictures of their wedding ceremony 5 hours ago and their feed blew up, making it more important than someone's best-ever ramen noodles 15 mins ago.
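That "wedding outranks ramen" heuristic could be sketched roughly like this (the decay constant, weight, and field names are all made-up assumptions for illustration, not anything any real feed does):

```python
import math

def post_score(post, now):
    """Mostly chronological scoring, but a post whose reactions blew up
    (the wedding photos) can outrank a newer low-engagement post."""
    age_hours = (now - post["ts"]) / 3600
    recency = math.exp(-age_hours / 24)   # decays over roughly a day
    buzz = math.log1p(post["reactions"])  # diminishing returns on reactions
    return recency + 0.3 * buzz

now = 1_000_000.0
wedding = {"ts": now - 5 * 3600, "reactions": 240}  # 5 hours old, feed blew up
ramen = {"ts": now - 15 * 60, "reactions": 2}       # 15 minutes old, quiet
assert post_score(wedding, now) > post_score(ramen, now)
```

With near-zero reactions the buzz term vanishes and the ordering degrades to plain recency, which is the point: chronology is the default, and only a genuine blow-up overrides it.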
I'm honestly starting to believe that it is. These are just random on-the-spot ideas, but I want to start seeing more innovative and different ideas. For example, if you want to optimize for getting news, let's see some innovation in journalism. If you want to optimize for being entertained, let's see some innovation in curated experiences. If you want to optimize for keeping in touch and connecting folks, let's do something akin to a CRM-for-friends ... where you still share all the stuff, but instead of showing you a feed, the site/app would just give you targeted reminders to reach out to folks you hadn't connected with in X amount of time.
Again, these are just random ideas off the cuff ... but we need to rethink this whole "wisdom of the crowds" thing, because it's too easy to manipulate by any number of actors in the system.
I believe promoted tweets work in a similar way. If a platform provides any sort of control over the newsfeed, it loses the leverage to show you what it wants you to see.
This, as you highlight, probably brings in the big bucks. This makes me think Google had something similar in mind when they "personalized" the search results (a wildly unpopular move), going to show that when you are the product, there may be no UX too unpleasant to sink to.
The upside of this is that these incremental UX sacrifices leave the market open for an alternative, either of the same ad-driven kind or, hopefully, of the more sustainable kind.
Hey, a man can dream.
A usable UI (for non-niche sites) is one of those things you can get nostalgic about with "back in my day...". It's not coming back as long as advertisers pay for things.
Providing user control (even limited) over the newsfeed actually increases leverage with regards to long-term revenue, since it's an expression of user intent... which is a clear datum for targeting. What it dampens is the kind of "run of site" or "simple demographic" advertising that sites often accept because it's "found money!" Except that in the long-term it isn't, due to inattention/abandonment problem mentioned earlier.
The reason for the shift to an algorithm is that FB doesn't want to give you any incentive to unfriend or unfollow anyone, because that reduces FB's ability to maximize your engagement. They want to be in control, not you.
If FB just let chronological newsfeeds get out of control, we'd have no choice but to manage them ourselves, avoiding this entire problem. It would cost us a little time upfront but save hours in the long run and result in a healthier, less algorithmic experience.
The easiest way to judge what FB believes is in their interest is to compare various actions by their ease of execution. For example, "liking" a post is one click and instant. Unfollowing is 2-3 clicks into a hidden menu at the least. Not to mention that it has an artificial 1-2 second delay built in that kind of makes you "pay" for the action.
Remember old Facebook? The MySpace ripoff with a nice design?
Your homepage populated with widgets? That can easily cope with 200 friends. You'd just have separate widgets for different types of events. One widget for a feed of genuine status updates, one for friends' shares, one for likes etc.
Mobile first destroys that. You've only got space for one list.
FWIW, I don't even have Facebook installed on my phone. My most frequently used apps are Firefox, Signal & Inbox (in roughly that order), followed probably by Maps, Google Play Music, KeePass & Authenticator. For everything else, there's my desktop or my laptop.
The everything-nature of the feed itself is the problem. It is the source of its own weakness.
A source of information too dense to be browsed by a normal human was created, forced as the only way to interact with the product, and then automatically curated to 'fix' the issue that was created by having a firehose of updates.
It's certainly possible to overload people with settings, but I don't see why we don't see a choice of a few very different feed filters to choose from. It's not like any single algorithm can be what everyone wants.
This seems more important than more photo filters, don't you think?
I think there are a couple of important takeaways from it. First, Facebook wasn't just doing that out of academic curiosity. They're manipulating their users to maximize their own benefit. And the more important point is that that behavior is, all but certainly, something almost every company that relies on advertising is engaging in. Letting you see what you want to see is not their goal, except when it happens to coincide with you being more likely to stay on their service and provide more personal information to them.
And we might intuitively expect these two goals to be aligned, but this is improbable. Consider casinos. It's not wins that keep people gambling - almost nobody wins in the long run in most games. Instead it's a complex system of addiction, including near wins and audio-visual manipulation, that keeps people dumping their money into the machines. And so too with Facebook: it's that small rush of interaction, likes, and so on that keeps people 'playing'.
A casino in which you ALWAYS lost, despite a constant "near win!" incentive and all the other manipulations would quickly be empty.
it's not like crazy-amounts-of-posting is some inexorable law; it's a response to the social networks as they now exist.
even for too much content to read, a time-based sample is arguably still superior to an algorithmic one, because it's _unbiased_. this gives you (1) an accurate impression of the mood and content of your timeline and (2) completely short-circuits problems with soft-censorship and manipulative influence. (fb has done various vaguely-evil experiments w/ controlling the emotional content of your timeline one way or the other.)
lastly, there's an easy fix that gives you best-of-both worlds: simply allow tagging some sets of people as "close friends" (or any other label!) and allow filtering posts by that label.
(and, sure, if you want to, go ahead and include an algorithmic timeline view. or even algorithmic w/ various different criteria for inclusion. the fact that fb/instagram/twitter won't even give you the option of chronological ordering suggests that it's more about maintaining control than about creating a good user experience. /tinfoilhat)
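The label-filter fix above is genuinely simple; a sketch, assuming posts are dicts with an `author` key and the user maintains a label map themselves:

```python
def filtered_feed(posts, label, labels_by_author):
    """Plain chronological feed restricted to authors the user has
    tagged with `label` -- an explicit filter, not a ranking."""
    return [p for p in posts
            if label in labels_by_author.get(p["author"], set())]

labels_by_author = {"jen": {"close friends"}, "timmy": {"meme posters"}}
posts = [
    {"author": "jen", "text": "cat video"},
    {"author": "timmy", "text": "meme"},
]
```

Because the filter never reorders anything, the "where did I leave off" property of a chronological feed is preserved within every label view.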
The problem is that I open the app, see something interesting (like a friend announcing a baby), leave a comment, and swipe back to my home screen. I tell someone around me, they want to see it, and so I reopen the app - and that post has been obliterated from my timeline, because it already extracted value from me with the like and comment. And this is literally within 5 seconds of leaving the app (not even killing it, just going to the home screen).
The true reason for the algorithmic timeline is just to increase engagement, that is all. With a timeline-based feed you know where you left off, and you know how far you have to scroll to get back there. With an arbitrarily organized feed you just keep scrolling, thinking you will find something interesting or new. It is the same reason why casinos have no windows and always the same lighting - so you stay longer and forget about time.
Facebook has this feature. You can tag someone as a close friend, and it will show you posts by close friends first.
(i havent tried recently, tho.)
But then Google+ failed, and we never got to find out whether Circles worked or not.
I also remember the slide deck from some hip product guy who pitched circles... a 150-slide hot mess.
In real life, you get "circles" for free! Whoever you are with in the current moment is your circle, and only they have access to what you "share" during that time period.
There is some leaking through the grapevine, but we have an intuition for that and usually handle it decently well. It's often even desired.
And the whole reason Facebook et al are popular is that in real life it's really hard to get all the people you want in the right "circles" (i.e. places) at the right times.
You can send a group message, which I suppose can sort of function like a circle, especially if it's long lived. But it would be hard to manage many of those.
Instead we really should use NLP to automatically categorize content.
e.g. I want my feed to be 50% seriously thought-provoking articles, 20% shitposts, 20% news, and 10% memes.
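Given a classifier (however rough), filling a feed to those target fractions is a simple quota pass. A sketch assuming posts arrive already labelled, newest first; `build_feed` and the category names are hypothetical:

```python
def build_feed(labelled_posts, mix, size):
    """Fill a feed of up to `size` items honouring per-category quotas.

    labelled_posts: list of (category, post) pairs, newest first.
    mix: {category: fraction of the feed}, fractions summing to 1.
    Note: round() can make quotas sum to slightly more or less than
    `size`; a real implementation would reconcile the remainder.
    """
    quotas = {cat: round(frac * size) for cat, frac in mix.items()}
    feed = []
    for category, post in labelled_posts:
        if quotas.get(category, 0) > 0:
            feed.append(post)
            quotas[category] -= 1
        if len(feed) == size:
            break
    return feed
```

The hard part, of course, is the classifier itself, not the quota bookkeeping.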
Every social network now has asymmetric following, but it's all or nothing ... I wish I could follow someone's tech posts, and another person's kid posts, and yet another person's political posts ... etc etc
I was pretty excited when I first found out about Google+, but it was a similar excitement to discovering Facebook and also Fitocracy, yet all of that meant little upon the realisation that I wasn't truly getting value out of any of them.
For a long time, I was in denial about Facebook being bad for me (I thought it was everyone else who was using it wrong) but these days I can no longer justify it in terms of productivity, benefit, etc.
Simply, I need to read more books. No social network is going to help me with that; not even Goodreads (or LibraryThing - anyone remember that?).
So rather than creating and putting people into a “Summer Trip 2018” group, you’d create a “Summer Trip 2018” topic that you share things to, and your friends would subscribe if interested. If you frequently post about food, you could create a food topic, and people would subscribe if interested.
As we all know, people with 300 or even 500 friends see nowhere near 1,500 posts a day. Sure, some friends post 10 memes a day, but most people do not post anything at all. In any case, people were able to manage the chronological feed without any problems. I did not hear anyone complaining that they had to deal with too many posts, and managing the feed wasn't an issue.
What was an issue, however, was that Facebook needed to make more money. So they created an algorithmic feed that pushed posts that engaged people more (videos and images) and downgraded external links (to keep people on the platform).
They lied about the 1500 posts a day, and journalists passed it on, and Facebook made more money. All at the users' expense.
This is clearly nonsense. Why do people’s “likes” end up in the feed in the first place? When people say they want a reverse chronological feed they mean of actual posts, not every possible activity. And everyone knows it, too.
Actually, the fact that your friends' likes show up in the Facebook newsfeed is actively discouraging me from liking anything. I do not want my friends to see what posts I like.
Before my deactivation, I thought about those in my list whom I wanted to see or communicate with and to be honest, I couldn't think of that many in proportion to all those I mostly added a decade ago. Frankly, I found the "news" to be more interesting than most "personal" posts.
I also didn't want new Friends to be able to easily trawl through my no-longer-relevant past Facebook life. Upon realising there was no easy way to control the privacy settings for that and in frustration, I started the process of deleting Facebook. (Not many realise this, but Messenger will still work while an account is deactivated.)
It's simply too hard to create a graduated list of different "Friends" on Facebook (or Google+). While their presented mission might be to "connect people", real life doesn't work that way. Coupled with an encouragement of oversharing by design, it's a recipe for not a very good website to spend time on.
I don't know of anyone within my own friend group who has become successful by using more social media. I also don't plan to be the first, not when other, more lucrative niches exist.
Then you either have a VERY big family or a VERY broad definition of “close friends”.
There is this idea that for it to be a "proper" wedding, you have to host a formal party and provide for each and every person who has had an impact on your life. It doesn't have to be this way, but if one chooses to do so, the implication is that the planning and cost skyrocket.
In terms of social media, I might be a little outside the norm. I have at most 150 friends, family members and acquaintances on Facebook. Maybe 10 of them post the majority of the content. If people post nothing but links, games, or shares, I just unfollow them. I only care about the stuff that's actually going on in people's lives, and that is an increasingly small subset of all Facebook posts.
My Facebook feed is curated, by me, so that I can browse everything that happened in the last week in 10 minutes at most. So the article's claim that wanting chronological order is pointless doesn't make much sense to me.
Facebook just doesn't seem to encourage this kind of usage; there simply isn't enough ad money in it.
Add in significant others, then double again for bride and groom.
That's 100 people. Sure there's lots of overlap, so it wouldn't actually be 100, but I know I personally have more than 25 people I want at my wedding that aren't friends/family of my fiancee.
All total, we had 23 people, counting the caterer.
On the algorithmic side, Apple and Spotify are leading the way. For example, Apple Music solves this problem by grouping feeds: Favourites Mix, Chill Mix, New Music Mix, Friends Are Listening To, Radio.
That algorithm knows me very well, and it's able to delight me with new music every week.
Ability to craft a mood or tell a story, relate to current events, culture.
The other problem is that few humans get to do that directly now; the experience is filtered through algorithms, and then cloned and shuffled.
Takeaway: "cool", relevant, "connected" come from minds, not rules, or limited AI in use today.
People are supreme at discerning the intent of others, through nearly any medium. Think of a prisoner tapping on the wall.
The more diluted, distant, muffled that intent is, the less compelling the expression of it is.
We quite simply get a lesser sense of the mind.
Think original painting vs a print.
Or a live DJ making an evening: connecting with, relating to, and sharing their local scene with me directly. By that I mean picking the tracks they want, for human reasons; talking for their own reasons, not delivering liners; even pausing for a few seconds to let something awesome sink in. Couple that with phones and social media, and it's a compelling experience, given that the DJ understands and has mastery of their scene and can express it.
Now, that same DJ, voice tracked all over the nation, picking from a curated, "playola" list of "researched" tunes, and delivered via automation, play lists, etc...
The former can be pure magic. The latter ranging from pleasant to barely tolerable.
That is what is missing.
How many big companies have internal playlists? And if they do, what do people do, how do the tunes get on there and why do they matter?
Great questions worth really thinking about in human terms.
An example might be, say me making a music recommendation to a member with access to the list.
It starts with a quick exchange via SMS, email, whatever, "man, I'm waist deep in this driver code and just can't focus."
"Feel you brother / sister, give these guys a go, they are my goto, for low level assembly language programming. Give them 5 minutes, and you are there, in flow."
(ozric tentacles, would plug in here, for a music selection)
Later, "Man, they made a bazillion albums! This is great!"
On the company play list later on, "Lose flow? A buddy sent me these guys, thank him later, enjoy!"
Later still, someone selects a few tunes and tags them for flow, trance, whatever...
At each layer, the human spark of relevancy, meaning, context gets diluted, until it's just another data point tag like all the other tags.
Going back to the evening DJ.
Making a set, live, maybe lightly prepped for Friday the 4th isn't the same as a Friday goto set for summer, which isn't the same as researched, can't miss hits a program selects from on Fridays.
At each layer here the human to human contact is diluted, more distant, less expressive, less relevant, you get the idea.
Lastly, a human playlist programmer is one step removed from a human spinning tunes directly. But it's good, in that streaming eliminated the need for massive, broad generalization common to, say syndicated radio.
Not all expression forms can be done play list programmer style, but the basics can. Mood, time, feel, a general matchup people can identify with come across just fine, depending on how directly the play list is able to be specified.
People like and identify with those.
To connect, and get the real human mind to mind does require a person with considerable agency, perception, creativity, and talent.
Mass broadcast, as deregulation allowed it all to scale, diluted this. Secondly, the need for humans inhibited the ability to trade on the value they create without cutting them in and/or yielding editorial control.
The product is a shell of what can be done, but it scales, can be reproduced, bought, sold, managed editorially.
Those are the basic dynamics as I see them.
Over the course of years you repeat this many times, until one day when Jeeves decides not to suggest but instead to simply pack your jeans and a bowling shirt. When you arrive at your destination and open your bag you are aghast and call your butler. "Good God Jeeves! I specifically asked you for a tuxedo to attend the baseball game and you packed me this slovenly casual wear!" Jeeves, in his defense, responds "but sir, for YEARS you've asked for a tuxedo to attend baseball games and I have suggested casual wear, and you have ALWAYS chosen the casual wear!" "True enough Jeeves, you presumptuous butler, but tonight I am singing the national anthem at the game! Without formal attire I shall be reduced to ruin! You're fired!"
Too many products have opted to be the presumptuous Jeeves, without even bothering to spend the years our poor butler did learning the eccentricities of his employer. And yet even with that dedication, poor Jeeves was fired because he was confident he could infer the intentions of his employer. Whether it was the arrogance of Jeeves presuming to know better than his employer, the ignorance of Jeeves in understanding that he lacked crucial information, or that Jeeves suffered under the delusion that past performance predicted future results matters not a whit... because Jeeves has been sacked.
Jeeves may find employment elsewhere but if he does not mend his ways he will eventually find himself unwanted and unemployable due entirely to his own faults.
The mistake you imagine is exactly the kind of thing competent human assistants are good at avoiding and AI is terrible at.
Competent human assistants know to negotiate formally or informally with their employers the limits of their authority. Unfortunately those creating products either do not know or do not care to negotiate clarity on such limits, and end up like poor Jeeves.
I did not opt-in to the agreement.
If you think humans are going to work like that, you better never hire anyone.
I created a tool that lets users aggregate top posts from selected subreddits and receive a daily email summary, but I think more can be done in this space. I would also like a better discovery mechanism for things I may be interested in that aren't obviously connected to current content (e.g. someone interested in programming may also be interested in art, if given a nice introduction to that information/scene, but current algorithms just stick to known interests).
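A minimal sketch of that kind of digest, assuming Reddit's public JSON listing endpoint (`/r/<sub>/top.json`) stays available; the function names are hypothetical and the email-sending step is omitted, so only fetching and plain-text formatting are shown:

```python
import json
import urllib.request

def fetch_top(subreddit, limit=5):
    """Fetch today's top posts via Reddit's public JSON endpoint."""
    url = f"https://www.reddit.com/r/{subreddit}/top.json?t=day&limit={limit}"
    req = urllib.request.Request(url, headers={"User-Agent": "digest-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [(child["data"]["title"], child["data"]["score"])
            for child in data["data"]["children"]]

def format_digest(sections):
    """sections: {subreddit: [(title, score), ...]} -> plain-text email body."""
    lines = []
    for sub, posts in sections.items():
        lines.append(f"## r/{sub}")
        for title, score in posts:
            lines.append(f"  [{score}] {title}")
    return "\n".join(lines)
```

Piping `format_digest(...)` into a cron job and `smtplib` would complete the daily-email loop.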
A user could be able to specify "I want 50% of my newsfeed to be from immediate family, 35% from close friends, 15% from acquaintances." Use a pie-chart slider bar to make it clear. Then let the algorithm figure out how to interpret that chart.
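Turning pie-chart fractions into whole feed slots is the only slightly fiddly part of interpreting that chart. A sketch using largest-remainder allocation, with the tier names taken from the comment above:

```python
def allocate(weights, size):
    """Largest-remainder allocation: pie-chart fractions -> integer slots.

    weights: {tier: fraction}, fractions summing to 1.
    Guarantees the slot counts sum exactly to `size`.
    """
    exact = {tier: w * size for tier, w in weights.items()}
    slots = {tier: int(v) for tier, v in exact.items()}  # floor each share
    leftover = size - sum(slots.values())
    # hand the remaining slots to the largest fractional remainders
    for tier in sorted(exact, key=lambda t: exact[t] - slots[t],
                       reverse=True)[:leftover]:
        slots[tier] += 1
    return slots
```

Largest-remainder is the same scheme used to apportion parliamentary seats, which is exactly the shape of this problem: fractions in, whole items out, no slot lost to rounding.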
Orkut didn't have the concept of a news feed. It had just communities, which were thousands of micro forums people joined and were displayed on their profiles; and scrapbooks, which were walls of text messages -- people could post on others' scrapbooks and on their own, so you used it to both message people and "broadcast" content about yourself, but that wasn't shown to anyone who hadn't voluntarily visited that scrapbook page.
Instead of having an algorithmic feed that tunes an opaque optimization problem, we should harness the power of AI/ML and optimize for what the user explicitly wants - whether it be learning, family, or sports commentary.
In the early days, people posted rather lengthy articles/editorials about what they were doing, thinking, feeling, who they were, etc. On Livejournal, then on blogs, even in the early days of Facebook. But those are mostly gone now, reduced to tweet-sized chunks and making up only a fraction of the feed. Instead, people mostly just reshare, reblog, retweet, copy memes, add links, sometimes photos, checkin somewhere, etc. A newsletter with all of that filtered out, and only the actual content posts would usually be empty.
But if someone could start something like that and start a trend of people actually communicating on a large scale, and making it as digest/newspaper style, that could be pretty interesting.
You could have your own AI bot that reads your newsfeeds for you and tells you only what you might be interested in. The AI is tuned to your own preferences, not the social media provider's.
Has anyone done anything like this already?
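At its simplest, such a bot is just a scoring pass where the weights belong to the user rather than the provider. A toy sketch with hypothetical keyword weights standing in for a real preference model:

```python
def score(post, interests):
    """Score a post against the user's own interest weights (word -> weight)."""
    return sum(interests.get(word, 0.0) for word in post.lower().split())

def my_feed(posts, interests, threshold=1.0):
    """Keep only the posts the user's model finds interesting.

    The provider never sees `interests`; the tuning lives client-side.
    """
    return [p for p in posts if score(p, interests) >= threshold]
```

A real version would use something richer than bag-of-words, but the architecture is the point: the ranking function runs on the user's side.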
I don't expect a newsfeed to be an exact representation of everything that I'm going to find interesting, nor can it guarantee that it is never going to miss anything - the best that Facebook is ever going to do is a 'best guess'.
A good analogy might be my Spotify Discover Weekly playlist - it is pretty good, but it isn't a complete encapsulation of my musical tastes, and a lot more manageable than a chronological representation of everything that has been released that week.
What Facebook has always been missing is a low friction, invisible way to say "show less of this kind of thing", without it seeming like I am unfollowing that person, or the social baggage of it being a 'Dislike' button.
If I need any more granularity than that, it's probably more worth me contacting the person directly.
False flag. The Feed is focused on engagement. Period. There is no right or wrong. Just engagement.
No one can know what the possible feed is, let alone what their ideal feed should be. Let's face it, that's going to change from day to day. We're humans. It's what we do.
As long as the long arc keeps you coming back THAT is all FB is concerned about.
Different people have different standards for what they call friends.
> There are lots of incentives for people (Russians, game developers) to try to manipulate the feed.
Isn’t this what the notifications on FB are for?
The last thing we need is yet another standard; we need people to utilize the existing ones, like activitypub, webmentions, websub, micropub. And grouping is a client task.
This is not a tech problem in 2018.
Facebook founder and CEO Mark Zuckerberg has responded personally, saying "Calm down. Breathe. We hear you." and "We didn't take away any privacy options."
It's not possible in iMessage - with end-to-end encryption, Apple has no idea what you're sharing.
[ There is no technological reason that one company, or any company, needs to have control over the management of these "groups" and the "messaging" that may occur within them.
Two "groups" communicating over internet need not be connected in any way, bridging them is an option that should be left for the user. For example, each group is reachable only via a separate network interface. Not "networks" or "interfaces" as defined by marketing copy and buzzwords, but traditional network interfaces defined by software. The type of interfaces displayed when a user types "ifconfig" and "ipconfig /all".
The interface used to "surf the web", the one that connects to the sewer of advertising that is today's web, is not the same one used to reach any group. Think of this like VLANs if you wish, but it's even simpler. I have been "beta-testing" such a system, which is not new, for many years. There's no ads, no spam, no BS. It works. The "intelligence" of the network, so to speak, is at the edge, with the user.
There's no money in this, necessarily. This isn't an elevator pitch. It's called "a private life". Everyone needs one. The whole point is to escape the commercial world for a little while. "Clean separation", like those network interfaces. ]
When I read these kinds of articles, I am always amused by the kinds of problems your life gets enriched with when you are on Facebook.
Not to mention the occasional article about how much your life improved after you stopped using FB.